00:00:00.001 Started by upstream project "autotest-per-patch" build number 126139
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.076 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.077 The recommended git tool is: git
00:00:00.077 using credential 00000000-0000-0000-0000-000000000002
00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.104 Fetching changes from the remote Git repository
00:00:00.106 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.141 Using shallow fetch with depth 1
00:00:00.141 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.141 > git --version # timeout=10
00:00:00.173 > git --version # 'git version 2.39.2'
00:00:00.173 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.202 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.202 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.769 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.781 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.792 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:04.792 > git config core.sparsecheckout # timeout=10
00:00:04.802 > git read-tree -mu HEAD # timeout=10
00:00:04.818 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:04.836 Commit message: "inventory: add WCP3 to free inventory"
00:00:04.836 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:04.937 [Pipeline] Start of Pipeline
00:00:04.959 [Pipeline] library
00:00:04.961 Loading library shm_lib@master
00:00:04.961 Library shm_lib@master is cached. Copying from home.
00:00:04.975 [Pipeline] node
00:00:04.982 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.983 [Pipeline] {
00:00:04.996 [Pipeline] catchError
00:00:04.997 [Pipeline] {
00:00:05.007 [Pipeline] wrap
00:00:05.014 [Pipeline] {
00:00:05.023 [Pipeline] stage
00:00:05.024 [Pipeline] { (Prologue)
00:00:05.206 [Pipeline] sh
00:00:05.488 + logger -p user.info -t JENKINS-CI
00:00:05.504 [Pipeline] echo
00:00:05.505 Node: WFP8
00:00:05.512 [Pipeline] sh
00:00:05.808 [Pipeline] setCustomBuildProperty
00:00:05.818 [Pipeline] echo
00:00:05.819 Cleanup processes
00:00:05.823 [Pipeline] sh
00:00:06.103 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.103 3776272 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.116 [Pipeline] sh
00:00:06.399 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.399 ++ grep -v 'sudo pgrep'
00:00:06.399 ++ awk '{print $1}'
00:00:06.399 + sudo kill -9
00:00:06.399 + true
00:00:06.413 [Pipeline] cleanWs
00:00:06.419 [WS-CLEANUP] Deleting project workspace...
00:00:06.419 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.426 [WS-CLEANUP] done
00:00:06.430 [Pipeline] setCustomBuildProperty
00:00:06.441 [Pipeline] sh
00:00:06.729 + sudo git config --global --replace-all safe.directory '*'
00:00:06.788 [Pipeline] httpRequest
00:00:06.812 [Pipeline] echo
00:00:06.814 Sorcerer 10.211.164.101 is alive
00:00:06.823 [Pipeline] httpRequest
00:00:06.826 HttpMethod: GET
00:00:06.827 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:06.827 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:06.840 Response Code: HTTP/1.1 200 OK
00:00:06.840 Success: Status code 200 is in the accepted range: 200,404
00:00:06.841 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:10.708 [Pipeline] sh
00:00:10.989 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:11.007 [Pipeline] httpRequest
00:00:11.028 [Pipeline] echo
00:00:11.030 Sorcerer 10.211.164.101 is alive
00:00:11.039 [Pipeline] httpRequest
00:00:11.043 HttpMethod: GET
00:00:11.044 URL: http://10.211.164.101/packages/spdk_a0b7842f906ec70f6a0557136fd7e344a236aa97.tar.gz
00:00:11.045 Sending request to url: http://10.211.164.101/packages/spdk_a0b7842f906ec70f6a0557136fd7e344a236aa97.tar.gz
00:00:11.054 Response Code: HTTP/1.1 200 OK
00:00:11.055 Success: Status code 200 is in the accepted range: 200,404
00:00:11.055 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a0b7842f906ec70f6a0557136fd7e344a236aa97.tar.gz
00:00:57.165 [Pipeline] sh
00:00:57.454 + tar --no-same-owner -xf spdk_a0b7842f906ec70f6a0557136fd7e344a236aa97.tar.gz
00:01:00.000 [Pipeline] sh
00:01:00.283 + git -C spdk log --oneline -n5
00:01:00.283 a0b7842f9 util: rm auto size detect from SPDK_GET_FIELD
00:01:00.283 719d03c6a sock/uring: only register net impl if supported
00:01:00.283 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:01:00.283 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:01:00.283 6c7c1f57e accel: add sequence outstanding stat
00:01:00.296 [Pipeline] }
00:01:00.315 [Pipeline] // stage
00:01:00.327 [Pipeline] stage
00:01:00.330 [Pipeline] { (Prepare)
00:01:00.347 [Pipeline] writeFile
00:01:00.362 [Pipeline] sh
00:01:00.645 + logger -p user.info -t JENKINS-CI
00:01:00.657 [Pipeline] sh
00:01:00.939 + logger -p user.info -t JENKINS-CI
00:01:00.952 [Pipeline] sh
00:01:01.236 + cat autorun-spdk.conf
00:01:01.236 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.236 SPDK_TEST_NVMF=1
00:01:01.236 SPDK_TEST_NVME_CLI=1
00:01:01.236 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:01.236 SPDK_TEST_NVMF_NICS=e810
00:01:01.236 SPDK_TEST_VFIOUSER=1
00:01:01.236 SPDK_RUN_UBSAN=1
00:01:01.236 NET_TYPE=phy
00:01:01.244 RUN_NIGHTLY=0
00:01:01.249 [Pipeline] readFile
00:01:01.278 [Pipeline] withEnv
00:01:01.281 [Pipeline] {
00:01:01.297 [Pipeline] sh
00:01:01.581 + set -ex
00:01:01.581 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:01.581 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:01.581 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.581 ++ SPDK_TEST_NVMF=1
00:01:01.581 ++ SPDK_TEST_NVME_CLI=1
00:01:01.581 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:01.581 ++ SPDK_TEST_NVMF_NICS=e810
00:01:01.581 ++ SPDK_TEST_VFIOUSER=1
00:01:01.581 ++ SPDK_RUN_UBSAN=1
00:01:01.581 ++ NET_TYPE=phy
00:01:01.581 ++ RUN_NIGHTLY=0
00:01:01.581 + case $SPDK_TEST_NVMF_NICS in
00:01:01.581 + DRIVERS=ice
00:01:01.581 + [[ tcp == \r\d\m\a ]]
00:01:01.581 + [[ -n ice ]]
00:01:01.581 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:01.581 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:01.581 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:01.581 rmmod: ERROR: Module irdma is not currently loaded
00:01:01.581 rmmod: ERROR: Module i40iw is not currently loaded
00:01:01.581 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:01.581 + true
00:01:01.581 + for D in $DRIVERS
00:01:01.581 + sudo modprobe ice
00:01:01.581 + exit 0
00:01:01.591 [Pipeline] }
00:01:01.609 [Pipeline] // withEnv
00:01:01.615 [Pipeline] }
00:01:01.633 [Pipeline] // stage
00:01:01.645 [Pipeline] catchError
00:01:01.647 [Pipeline] {
00:01:01.666 [Pipeline] timeout
00:01:01.666 Timeout set to expire in 50 min
00:01:01.668 [Pipeline] {
00:01:01.687 [Pipeline] stage
00:01:01.689 [Pipeline] { (Tests)
00:01:01.707 [Pipeline] sh
00:01:01.990 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:01.990 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:01.990 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:01.990 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:01.990 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:01.990 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:01.990 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:01.990 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:01.990 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:01.990 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:01.990 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:01.990 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:01.990 + source /etc/os-release
00:01:01.990 ++ NAME='Fedora Linux'
00:01:01.990 ++ VERSION='38 (Cloud Edition)'
00:01:01.990 ++ ID=fedora
00:01:01.990 ++ VERSION_ID=38
00:01:01.990 ++ VERSION_CODENAME=
00:01:01.990 ++ PLATFORM_ID=platform:f38
00:01:01.990 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:01.990 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:01.990 ++ LOGO=fedora-logo-icon
00:01:01.990 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:01.990 ++ HOME_URL=https://fedoraproject.org/
00:01:01.990 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:01.990 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:01.990 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:01.990 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:01.990 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:01.990 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:01.990 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:01.990 ++ SUPPORT_END=2024-05-14
00:01:01.990 ++ VARIANT='Cloud Edition'
00:01:01.990 ++ VARIANT_ID=cloud
00:01:01.990 + uname -a
00:01:01.990 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:01.990 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:04.526 Hugepages
00:01:04.526 node hugesize free / total
00:01:04.526 node0 1048576kB 0 / 0
00:01:04.526 node0 2048kB 0 / 0
00:01:04.526 node1 1048576kB 0 / 0
00:01:04.526 node1 2048kB 0 / 0
00:01:04.526 
00:01:04.526 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:04.526 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:04.526 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:04.526 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:04.526 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:04.526 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:04.526 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:04.526 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:04.526 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:04.526 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:04.526 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:04.526 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:04.526 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:04.526 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:04.526 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:04.526 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:04.526 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:04.526 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:04.526 + rm -f /tmp/spdk-ld-path
00:01:04.526 + source autorun-spdk.conf
00:01:04.526 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.526 ++ SPDK_TEST_NVMF=1
00:01:04.526 ++ SPDK_TEST_NVME_CLI=1
00:01:04.526 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:04.526 ++ SPDK_TEST_NVMF_NICS=e810
00:01:04.526 ++ SPDK_TEST_VFIOUSER=1
00:01:04.526 ++ SPDK_RUN_UBSAN=1
00:01:04.526 ++ NET_TYPE=phy
00:01:04.526 ++ RUN_NIGHTLY=0
00:01:04.526 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:04.526 + [[ -n '' ]]
00:01:04.526 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:04.526 + for M in /var/spdk/build-*-manifest.txt
00:01:04.526 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:04.526 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:04.526 + for M in /var/spdk/build-*-manifest.txt
00:01:04.526 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:04.526 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:04.526 ++ uname
00:01:04.526 + [[ Linux == \L\i\n\u\x ]]
00:01:04.526 + sudo dmesg -T
00:01:04.526 + sudo dmesg --clear
00:01:04.526 + dmesg_pid=3777724
00:01:04.526 + [[ Fedora Linux == FreeBSD ]]
00:01:04.526 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:04.526 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:04.526 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:04.526 + [[ -x /usr/src/fio-static/fio ]]
00:01:04.526 + export FIO_BIN=/usr/src/fio-static/fio
00:01:04.526 + FIO_BIN=/usr/src/fio-static/fio
00:01:04.526 + sudo dmesg -Tw
00:01:04.526 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:04.526 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:04.526 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:04.526 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:04.526 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:04.526 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:04.526 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:04.526 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:04.526 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:04.526 Test configuration:
00:01:04.526 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.526 SPDK_TEST_NVMF=1
00:01:04.526 SPDK_TEST_NVME_CLI=1
00:01:04.526 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:04.526 SPDK_TEST_NVMF_NICS=e810
00:01:04.526 SPDK_TEST_VFIOUSER=1
00:01:04.526 SPDK_RUN_UBSAN=1
00:01:04.526 NET_TYPE=phy
00:01:04.526 RUN_NIGHTLY=0
00:01:04.526 17:09:23 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:04.526 17:09:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:04.526 17:09:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:04.526 17:09:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:04.526 17:09:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:04.526 17:09:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:04.526 17:09:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:04.526 17:09:23 -- paths/export.sh@5 -- $ export PATH
00:01:04.526 17:09:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:04.526 17:09:23 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:04.526 17:09:23 -- common/autobuild_common.sh@444 -- $ date +%s
00:01:04.526 17:09:23 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720796963.XXXXXX
00:01:04.526 17:09:23 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720796963.FO0ETU
00:01:04.526 17:09:23 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:01:04.526 17:09:23 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:01:04.526 17:09:23 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:04.526 17:09:23 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:04.526 17:09:23 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:04.526 17:09:23 -- common/autobuild_common.sh@460 -- $ get_config_params
00:01:04.526 17:09:23 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:01:04.526 17:09:23 -- common/autotest_common.sh@10 -- $ set +x
00:01:04.526 17:09:23 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:04.526 17:09:23 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:01:04.526 17:09:23 -- pm/common@17 -- $ local monitor
00:01:04.526 17:09:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:04.526 17:09:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:04.526 17:09:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:04.526 17:09:23 -- pm/common@21 -- $ date +%s
00:01:04.526 17:09:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:04.526 17:09:23 -- pm/common@21 -- $ date +%s
00:01:04.526 17:09:23 -- pm/common@25 -- $ sleep 1
00:01:04.526 17:09:23 -- pm/common@21 -- $ date +%s
00:01:04.526 17:09:23 -- pm/common@21 -- $ date +%s
00:01:04.784 17:09:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720796963
00:01:04.784 17:09:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720796963
00:01:04.784 17:09:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720796963
00:01:04.784 17:09:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720796963
00:01:04.784 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720796963_collect-vmstat.pm.log
00:01:04.784 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720796963_collect-cpu-load.pm.log
00:01:04.784 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720796963_collect-cpu-temp.pm.log
00:01:04.785 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720796963_collect-bmc-pm.bmc.pm.log
00:01:05.721 17:09:24 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:01:05.721 17:09:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:05.721 17:09:24 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:05.721 17:09:24 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:05.721 17:09:24 -- spdk/autobuild.sh@16 -- $ date -u
00:01:05.721 Fri Jul 12 03:09:24 PM UTC 2024
00:01:05.721 17:09:24 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:05.721 v24.09-pre-203-ga0b7842f9
00:01:05.721 17:09:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:05.721 17:09:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:05.721 17:09:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:05.721 17:09:24 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:05.721 17:09:24 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:05.721 17:09:24 -- common/autotest_common.sh@10 -- $ set +x
00:01:05.721 ************************************
00:01:05.721 START TEST ubsan
00:01:05.721 ************************************
00:01:05.721 17:09:24 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:05.721 using ubsan
00:01:05.721 
00:01:05.721 real 0m0.000s
00:01:05.721 user 0m0.000s
00:01:05.721 sys 0m0.000s
00:01:05.721 17:09:24 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:05.721 17:09:24 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:05.721 ************************************
00:01:05.721 END TEST ubsan
00:01:05.721 ************************************
00:01:05.721 17:09:24 -- common/autotest_common.sh@1142 -- $ return 0
00:01:05.721 17:09:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:05.721 17:09:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:05.721 17:09:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:05.721 17:09:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:05.721 17:09:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:05.721 17:09:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:05.721 17:09:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:05.721 17:09:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:05.721 17:09:24 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:06.008 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:06.008 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:06.277 Using 'verbs' RDMA provider
00:01:19.048 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:29.025 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:29.025 Creating mk/config.mk...done.
00:01:29.025 Creating mk/cc.flags.mk...done.
00:01:29.025 Type 'make' to build.
00:01:29.025 17:09:47 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:29.025 17:09:47 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:29.025 17:09:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:29.025 17:09:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:29.025 ************************************
00:01:29.025 START TEST make
00:01:29.025 ************************************
00:01:29.025 17:09:47 make -- common/autotest_common.sh@1123 -- $ make -j96
00:01:29.283 make[1]: Nothing to be done for 'all'.
00:01:30.666 The Meson build system
00:01:30.666 Version: 1.3.1
00:01:30.666 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:30.666 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:30.666 Build type: native build
00:01:30.666 Project name: libvfio-user
00:01:30.666 Project version: 0.0.1
00:01:30.666 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:30.666 C linker for the host machine: cc ld.bfd 2.39-16
00:01:30.666 Host machine cpu family: x86_64
00:01:30.666 Host machine cpu: x86_64
00:01:30.666 Run-time dependency threads found: YES
00:01:30.666 Library dl found: YES
00:01:30.666 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:30.666 Run-time dependency json-c found: YES 0.17
00:01:30.666 Run-time dependency cmocka found: YES 1.1.7
00:01:30.666 Program pytest-3 found: NO
00:01:30.666 Program flake8 found: NO
00:01:30.666 Program misspell-fixer found: NO
00:01:30.666 Program restructuredtext-lint found: NO
00:01:30.666 Program valgrind found: YES (/usr/bin/valgrind)
00:01:30.666 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:30.666 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:30.666 Compiler for C supports arguments -Wwrite-strings: YES
00:01:30.666 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:30.666 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:30.666 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:30.666 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:30.666 Build targets in project: 8
00:01:30.666 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:30.666 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:30.666 
00:01:30.666 libvfio-user 0.0.1
00:01:30.666 
00:01:30.666 User defined options
00:01:30.666 buildtype : debug
00:01:30.666 default_library: shared
00:01:30.666 libdir : /usr/local/lib
00:01:30.666 
00:01:30.666 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:30.923 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:30.923 [1/37] Compiling C object samples/null.p/null.c.o
00:01:30.923 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:30.923 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:31.181 [4/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:31.181 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:31.181 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:31.181 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:31.181 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:31.181 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:31.181 [10/37] Compiling C object samples/server.p/server.c.o
00:01:31.181 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:31.181 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:31.181 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:31.181 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:31.181 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:31.181 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:31.181 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:31.181 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:31.181 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:31.181 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:31.181 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:31.181 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:31.181 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:31.181 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:31.181 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:31.181 [26/37] Compiling C object samples/client.p/client.c.o
00:01:31.181 [27/37] Linking target samples/client
00:01:31.181 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:31.181 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:31.181 [30/37] Linking target test/unit_tests
00:01:31.181 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:31.439 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:31.439 [33/37] Linking target samples/null
00:01:31.439 [34/37] Linking target samples/shadow_ioeventfd_server
00:01:31.439 [35/37] Linking target samples/gpio-pci-idio-16
00:01:31.439 [36/37] Linking target samples/server
00:01:31.439 [37/37] Linking target samples/lspci
00:01:31.439 INFO: autodetecting backend as ninja
00:01:31.439 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:31.439 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:31.697 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:31.697 ninja: no work to do.
00:01:36.967 The Meson build system
00:01:36.967 Version: 1.3.1
00:01:36.967 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:36.967 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:36.967 Build type: native build
00:01:36.967 Program cat found: YES (/usr/bin/cat)
00:01:36.967 Project name: DPDK
00:01:36.967 Project version: 24.03.0
00:01:36.967 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:36.967 C linker for the host machine: cc ld.bfd 2.39-16
00:01:36.967 Host machine cpu family: x86_64
00:01:36.967 Host machine cpu: x86_64
00:01:36.967 Message: ## Building in Developer Mode ##
00:01:36.967 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:36.967 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:36.967 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:36.967 Program python3 found: YES (/usr/bin/python3)
00:01:36.967 Program cat found: YES (/usr/bin/cat)
00:01:36.967 Compiler for C supports arguments -march=native: YES
00:01:36.967 Checking for size of "void *" : 8
00:01:36.967 Checking for size of "void *" : 8 (cached)
00:01:36.967 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:36.967 Library m found: YES
00:01:36.967 Library numa found: YES
00:01:36.967 Has header "numaif.h" : YES
00:01:36.967 Library fdt found: NO
00:01:36.967 Library execinfo found: NO
00:01:36.967 Has header "execinfo.h" : YES
00:01:36.967 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:36.967 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:36.967 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:36.967 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:36.967 Run-time dependency openssl found: YES 3.0.9
00:01:36.967 Run-time dependency libpcap found: YES 1.10.4
00:01:36.967 Has header "pcap.h" with dependency libpcap: YES
00:01:36.967 Compiler for C supports arguments -Wcast-qual: YES
00:01:36.967 Compiler for C supports arguments -Wdeprecated: YES
00:01:36.967 Compiler for C supports arguments -Wformat: YES
00:01:36.967 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:36.967 Compiler for C supports arguments -Wformat-security: NO
00:01:36.967 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:36.967 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:36.967 Compiler for C supports arguments -Wnested-externs: YES
00:01:36.967 Compiler for C supports arguments -Wold-style-definition: YES
00:01:36.967 Compiler for C supports arguments -Wpointer-arith: YES
00:01:36.967 Compiler for C supports arguments -Wsign-compare: YES
00:01:36.967 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:36.967 Compiler for C supports arguments -Wundef: YES
00:01:36.967 Compiler for C supports arguments -Wwrite-strings: YES
00:01:36.967 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:36.967 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:36.967 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:36.967 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:36.967 Program objdump found: YES (/usr/bin/objdump)
00:01:36.967 Compiler for C supports arguments -mavx512f: YES
00:01:36.967 Checking if "AVX512 checking" compiles: YES
00:01:36.967 Fetching value of define "__SSE4_2__" : 1
00:01:36.967 Fetching value of define "__AES__" : 1
00:01:36.967 Fetching value of define "__AVX__" : 1
00:01:36.967 Fetching value of define "__AVX2__" : 1
00:01:36.967 Fetching value of define "__AVX512BW__" : 1
00:01:36.967 Fetching value of define "__AVX512CD__" : 1
00:01:36.967 Fetching value of define "__AVX512DQ__" : 1
00:01:36.967 Fetching value of define "__AVX512F__" : 1
00:01:36.967 Fetching value of define "__AVX512VL__" : 1 00:01:36.967 Fetching value of define "__PCLMUL__" : 1 00:01:36.967 Fetching value of define "__RDRND__" : 1 00:01:36.967 Fetching value of define "__RDSEED__" : 1 00:01:36.967 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:36.967 Fetching value of define "__znver1__" : (undefined) 00:01:36.967 Fetching value of define "__znver2__" : (undefined) 00:01:36.967 Fetching value of define "__znver3__" : (undefined) 00:01:36.967 Fetching value of define "__znver4__" : (undefined) 00:01:36.967 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:36.967 Message: lib/log: Defining dependency "log" 00:01:36.967 Message: lib/kvargs: Defining dependency "kvargs" 00:01:36.967 Message: lib/telemetry: Defining dependency "telemetry" 00:01:36.967 Checking for function "getentropy" : NO 00:01:36.967 Message: lib/eal: Defining dependency "eal" 00:01:36.967 Message: lib/ring: Defining dependency "ring" 00:01:36.967 Message: lib/rcu: Defining dependency "rcu" 00:01:36.967 Message: lib/mempool: Defining dependency "mempool" 00:01:36.967 Message: lib/mbuf: Defining dependency "mbuf" 00:01:36.967 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:36.967 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:36.967 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:36.967 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:36.967 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:36.967 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:36.967 Compiler for C supports arguments -mpclmul: YES 00:01:36.967 Compiler for C supports arguments -maes: YES 00:01:36.967 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:36.967 Compiler for C supports arguments -mavx512bw: YES 00:01:36.967 Compiler for C supports arguments -mavx512dq: YES 00:01:36.967 Compiler for C supports arguments -mavx512vl: YES 00:01:36.967 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:01:36.967 Compiler for C supports arguments -mavx2: YES 00:01:36.967 Compiler for C supports arguments -mavx: YES 00:01:36.967 Message: lib/net: Defining dependency "net" 00:01:36.967 Message: lib/meter: Defining dependency "meter" 00:01:36.967 Message: lib/ethdev: Defining dependency "ethdev" 00:01:36.967 Message: lib/pci: Defining dependency "pci" 00:01:36.967 Message: lib/cmdline: Defining dependency "cmdline" 00:01:36.967 Message: lib/hash: Defining dependency "hash" 00:01:36.967 Message: lib/timer: Defining dependency "timer" 00:01:36.967 Message: lib/compressdev: Defining dependency "compressdev" 00:01:36.967 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:36.967 Message: lib/dmadev: Defining dependency "dmadev" 00:01:36.967 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:36.967 Message: lib/power: Defining dependency "power" 00:01:36.967 Message: lib/reorder: Defining dependency "reorder" 00:01:36.967 Message: lib/security: Defining dependency "security" 00:01:36.967 Has header "linux/userfaultfd.h" : YES 00:01:36.968 Has header "linux/vduse.h" : YES 00:01:36.968 Message: lib/vhost: Defining dependency "vhost" 00:01:36.968 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:36.968 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:36.968 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:36.968 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:36.968 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:36.968 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:36.968 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:36.968 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:36.968 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:36.968 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:36.968 Program doxygen found: YES (/usr/bin/doxygen) 00:01:36.968 Configuring doxy-api-html.conf using configuration 00:01:36.968 Configuring doxy-api-man.conf using configuration 00:01:36.968 Program mandb found: YES (/usr/bin/mandb) 00:01:36.968 Program sphinx-build found: NO 00:01:36.968 Configuring rte_build_config.h using configuration 00:01:36.968 Message: 00:01:36.968 ================= 00:01:36.968 Applications Enabled 00:01:36.968 ================= 00:01:36.968 00:01:36.968 apps: 00:01:36.968 00:01:36.968 00:01:36.968 Message: 00:01:36.968 ================= 00:01:36.968 Libraries Enabled 00:01:36.968 ================= 00:01:36.968 00:01:36.968 libs: 00:01:36.968 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:36.968 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:36.968 cryptodev, dmadev, power, reorder, security, vhost, 00:01:36.968 00:01:36.968 Message: 00:01:36.968 =============== 00:01:36.968 Drivers Enabled 00:01:36.968 =============== 00:01:36.968 00:01:36.968 common: 00:01:36.968 00:01:36.968 bus: 00:01:36.968 pci, vdev, 00:01:36.968 mempool: 00:01:36.968 ring, 00:01:36.968 dma: 00:01:36.968 00:01:36.968 net: 00:01:36.968 00:01:36.968 crypto: 00:01:36.968 00:01:36.968 compress: 00:01:36.968 00:01:36.968 vdpa: 00:01:36.968 00:01:36.968 00:01:36.968 Message: 00:01:36.968 ================= 00:01:36.968 Content Skipped 00:01:36.968 ================= 00:01:36.968 00:01:36.968 apps: 00:01:36.968 dumpcap: explicitly disabled via build config 00:01:36.968 graph: explicitly disabled via build config 00:01:36.968 pdump: explicitly disabled via build config 00:01:36.968 proc-info: explicitly disabled via build config 00:01:36.968 test-acl: explicitly disabled via build config 00:01:36.968 test-bbdev: explicitly disabled via build config 00:01:36.968 test-cmdline: explicitly disabled via build config 00:01:36.968 test-compress-perf: explicitly disabled via build config 00:01:36.968 test-crypto-perf: explicitly disabled via 
build config 00:01:36.968 test-dma-perf: explicitly disabled via build config 00:01:36.968 test-eventdev: explicitly disabled via build config 00:01:36.968 test-fib: explicitly disabled via build config 00:01:36.968 test-flow-perf: explicitly disabled via build config 00:01:36.968 test-gpudev: explicitly disabled via build config 00:01:36.968 test-mldev: explicitly disabled via build config 00:01:36.968 test-pipeline: explicitly disabled via build config 00:01:36.968 test-pmd: explicitly disabled via build config 00:01:36.968 test-regex: explicitly disabled via build config 00:01:36.968 test-sad: explicitly disabled via build config 00:01:36.968 test-security-perf: explicitly disabled via build config 00:01:36.968 00:01:36.968 libs: 00:01:36.968 argparse: explicitly disabled via build config 00:01:36.968 metrics: explicitly disabled via build config 00:01:36.968 acl: explicitly disabled via build config 00:01:36.968 bbdev: explicitly disabled via build config 00:01:36.968 bitratestats: explicitly disabled via build config 00:01:36.968 bpf: explicitly disabled via build config 00:01:36.968 cfgfile: explicitly disabled via build config 00:01:36.968 distributor: explicitly disabled via build config 00:01:36.968 efd: explicitly disabled via build config 00:01:36.968 eventdev: explicitly disabled via build config 00:01:36.968 dispatcher: explicitly disabled via build config 00:01:36.968 gpudev: explicitly disabled via build config 00:01:36.968 gro: explicitly disabled via build config 00:01:36.968 gso: explicitly disabled via build config 00:01:36.968 ip_frag: explicitly disabled via build config 00:01:36.968 jobstats: explicitly disabled via build config 00:01:36.968 latencystats: explicitly disabled via build config 00:01:36.968 lpm: explicitly disabled via build config 00:01:36.968 member: explicitly disabled via build config 00:01:36.968 pcapng: explicitly disabled via build config 00:01:36.968 rawdev: explicitly disabled via build config 00:01:36.968 regexdev: 
explicitly disabled via build config 00:01:36.968 mldev: explicitly disabled via build config 00:01:36.968 rib: explicitly disabled via build config 00:01:36.968 sched: explicitly disabled via build config 00:01:36.968 stack: explicitly disabled via build config 00:01:36.968 ipsec: explicitly disabled via build config 00:01:36.968 pdcp: explicitly disabled via build config 00:01:36.968 fib: explicitly disabled via build config 00:01:36.968 port: explicitly disabled via build config 00:01:36.968 pdump: explicitly disabled via build config 00:01:36.968 table: explicitly disabled via build config 00:01:36.968 pipeline: explicitly disabled via build config 00:01:36.968 graph: explicitly disabled via build config 00:01:36.968 node: explicitly disabled via build config 00:01:36.968 00:01:36.968 drivers: 00:01:36.968 common/cpt: not in enabled drivers build config 00:01:36.968 common/dpaax: not in enabled drivers build config 00:01:36.968 common/iavf: not in enabled drivers build config 00:01:36.968 common/idpf: not in enabled drivers build config 00:01:36.968 common/ionic: not in enabled drivers build config 00:01:36.968 common/mvep: not in enabled drivers build config 00:01:36.968 common/octeontx: not in enabled drivers build config 00:01:36.968 bus/auxiliary: not in enabled drivers build config 00:01:36.968 bus/cdx: not in enabled drivers build config 00:01:36.968 bus/dpaa: not in enabled drivers build config 00:01:36.968 bus/fslmc: not in enabled drivers build config 00:01:36.968 bus/ifpga: not in enabled drivers build config 00:01:36.968 bus/platform: not in enabled drivers build config 00:01:36.968 bus/uacce: not in enabled drivers build config 00:01:36.968 bus/vmbus: not in enabled drivers build config 00:01:36.968 common/cnxk: not in enabled drivers build config 00:01:36.968 common/mlx5: not in enabled drivers build config 00:01:36.968 common/nfp: not in enabled drivers build config 00:01:36.968 common/nitrox: not in enabled drivers build config 00:01:36.968 
common/qat: not in enabled drivers build config 00:01:36.968 common/sfc_efx: not in enabled drivers build config 00:01:36.968 mempool/bucket: not in enabled drivers build config 00:01:36.968 mempool/cnxk: not in enabled drivers build config 00:01:36.968 mempool/dpaa: not in enabled drivers build config 00:01:36.968 mempool/dpaa2: not in enabled drivers build config 00:01:36.968 mempool/octeontx: not in enabled drivers build config 00:01:36.968 mempool/stack: not in enabled drivers build config 00:01:36.968 dma/cnxk: not in enabled drivers build config 00:01:36.968 dma/dpaa: not in enabled drivers build config 00:01:36.968 dma/dpaa2: not in enabled drivers build config 00:01:36.968 dma/hisilicon: not in enabled drivers build config 00:01:36.968 dma/idxd: not in enabled drivers build config 00:01:36.968 dma/ioat: not in enabled drivers build config 00:01:36.968 dma/skeleton: not in enabled drivers build config 00:01:36.968 net/af_packet: not in enabled drivers build config 00:01:36.968 net/af_xdp: not in enabled drivers build config 00:01:36.968 net/ark: not in enabled drivers build config 00:01:36.968 net/atlantic: not in enabled drivers build config 00:01:36.968 net/avp: not in enabled drivers build config 00:01:36.968 net/axgbe: not in enabled drivers build config 00:01:36.968 net/bnx2x: not in enabled drivers build config 00:01:36.968 net/bnxt: not in enabled drivers build config 00:01:36.968 net/bonding: not in enabled drivers build config 00:01:36.968 net/cnxk: not in enabled drivers build config 00:01:36.968 net/cpfl: not in enabled drivers build config 00:01:36.968 net/cxgbe: not in enabled drivers build config 00:01:36.968 net/dpaa: not in enabled drivers build config 00:01:36.968 net/dpaa2: not in enabled drivers build config 00:01:36.968 net/e1000: not in enabled drivers build config 00:01:36.968 net/ena: not in enabled drivers build config 00:01:36.968 net/enetc: not in enabled drivers build config 00:01:36.968 net/enetfec: not in enabled drivers build 
config 00:01:36.968 net/enic: not in enabled drivers build config 00:01:36.968 net/failsafe: not in enabled drivers build config 00:01:36.968 net/fm10k: not in enabled drivers build config 00:01:36.968 net/gve: not in enabled drivers build config 00:01:36.968 net/hinic: not in enabled drivers build config 00:01:36.968 net/hns3: not in enabled drivers build config 00:01:36.968 net/i40e: not in enabled drivers build config 00:01:36.968 net/iavf: not in enabled drivers build config 00:01:36.968 net/ice: not in enabled drivers build config 00:01:36.968 net/idpf: not in enabled drivers build config 00:01:36.968 net/igc: not in enabled drivers build config 00:01:36.968 net/ionic: not in enabled drivers build config 00:01:36.968 net/ipn3ke: not in enabled drivers build config 00:01:36.968 net/ixgbe: not in enabled drivers build config 00:01:36.968 net/mana: not in enabled drivers build config 00:01:36.968 net/memif: not in enabled drivers build config 00:01:36.968 net/mlx4: not in enabled drivers build config 00:01:36.968 net/mlx5: not in enabled drivers build config 00:01:36.968 net/mvneta: not in enabled drivers build config 00:01:36.968 net/mvpp2: not in enabled drivers build config 00:01:36.968 net/netvsc: not in enabled drivers build config 00:01:36.968 net/nfb: not in enabled drivers build config 00:01:36.968 net/nfp: not in enabled drivers build config 00:01:36.968 net/ngbe: not in enabled drivers build config 00:01:36.968 net/null: not in enabled drivers build config 00:01:36.968 net/octeontx: not in enabled drivers build config 00:01:36.969 net/octeon_ep: not in enabled drivers build config 00:01:36.969 net/pcap: not in enabled drivers build config 00:01:36.969 net/pfe: not in enabled drivers build config 00:01:36.969 net/qede: not in enabled drivers build config 00:01:36.969 net/ring: not in enabled drivers build config 00:01:36.969 net/sfc: not in enabled drivers build config 00:01:36.969 net/softnic: not in enabled drivers build config 00:01:36.969 net/tap: 
not in enabled drivers build config 00:01:36.969 net/thunderx: not in enabled drivers build config 00:01:36.969 net/txgbe: not in enabled drivers build config 00:01:36.969 net/vdev_netvsc: not in enabled drivers build config 00:01:36.969 net/vhost: not in enabled drivers build config 00:01:36.969 net/virtio: not in enabled drivers build config 00:01:36.969 net/vmxnet3: not in enabled drivers build config 00:01:36.969 raw/*: missing internal dependency, "rawdev" 00:01:36.969 crypto/armv8: not in enabled drivers build config 00:01:36.969 crypto/bcmfs: not in enabled drivers build config 00:01:36.969 crypto/caam_jr: not in enabled drivers build config 00:01:36.969 crypto/ccp: not in enabled drivers build config 00:01:36.969 crypto/cnxk: not in enabled drivers build config 00:01:36.969 crypto/dpaa_sec: not in enabled drivers build config 00:01:36.969 crypto/dpaa2_sec: not in enabled drivers build config 00:01:36.969 crypto/ipsec_mb: not in enabled drivers build config 00:01:36.969 crypto/mlx5: not in enabled drivers build config 00:01:36.969 crypto/mvsam: not in enabled drivers build config 00:01:36.969 crypto/nitrox: not in enabled drivers build config 00:01:36.969 crypto/null: not in enabled drivers build config 00:01:36.969 crypto/octeontx: not in enabled drivers build config 00:01:36.969 crypto/openssl: not in enabled drivers build config 00:01:36.969 crypto/scheduler: not in enabled drivers build config 00:01:36.969 crypto/uadk: not in enabled drivers build config 00:01:36.969 crypto/virtio: not in enabled drivers build config 00:01:36.969 compress/isal: not in enabled drivers build config 00:01:36.969 compress/mlx5: not in enabled drivers build config 00:01:36.969 compress/nitrox: not in enabled drivers build config 00:01:36.969 compress/octeontx: not in enabled drivers build config 00:01:36.969 compress/zlib: not in enabled drivers build config 00:01:36.969 regex/*: missing internal dependency, "regexdev" 00:01:36.969 ml/*: missing internal dependency, "mldev" 
00:01:36.969 vdpa/ifc: not in enabled drivers build config 00:01:36.969 vdpa/mlx5: not in enabled drivers build config 00:01:36.969 vdpa/nfp: not in enabled drivers build config 00:01:36.969 vdpa/sfc: not in enabled drivers build config 00:01:36.969 event/*: missing internal dependency, "eventdev" 00:01:36.969 baseband/*: missing internal dependency, "bbdev" 00:01:36.969 gpu/*: missing internal dependency, "gpudev" 00:01:36.969 00:01:36.969 00:01:36.969 Build targets in project: 85 00:01:36.969 00:01:36.969 DPDK 24.03.0 00:01:36.969 00:01:36.969 User defined options 00:01:36.969 buildtype : debug 00:01:36.969 default_library : shared 00:01:36.969 libdir : lib 00:01:36.969 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:36.969 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:36.969 c_link_args : 00:01:36.969 cpu_instruction_set: native 00:01:36.969 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:36.969 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:36.969 enable_docs : false 00:01:36.969 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:36.969 enable_kmods : false 00:01:36.969 max_lcores : 128 00:01:36.969 tests : false 00:01:36.969 00:01:36.969 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.247 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:37.247 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.247 [2/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.247 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.507 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.507 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:37.507 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.507 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:37.507 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.507 [9/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:37.507 [10/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.507 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:37.507 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:37.507 [13/268] Linking static target lib/librte_kvargs.a 00:01:37.507 [14/268] Linking static target lib/librte_log.a 00:01:37.507 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:37.507 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:37.507 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:37.507 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:37.507 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:37.767 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:37.767 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:37.767 [22/268] Linking static target lib/librte_pci.a 00:01:37.767 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:37.767 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:37.767 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:37.767 
[26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:37.767 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:37.767 [28/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:37.767 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:37.767 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:37.767 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:37.767 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:37.767 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:37.767 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:37.767 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:37.767 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:37.767 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:37.767 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:37.767 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:37.767 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:37.767 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:37.767 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:37.767 [43/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:37.767 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:37.767 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:37.767 [46/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:37.767 [47/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:37.767 [48/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:37.767 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:38.025 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:38.025 [51/268] Linking static target lib/librte_meter.a 00:01:38.025 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:38.025 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:38.025 [54/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:38.025 [55/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:38.025 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:38.025 [57/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:38.025 [58/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:38.025 [59/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:38.025 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:38.025 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:38.025 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:38.025 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:38.025 [64/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:38.025 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:38.025 [66/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:38.025 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:38.025 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:38.025 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:38.025 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:38.025 
[71/268] Linking static target lib/librte_ring.a 00:01:38.025 [72/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:38.025 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:38.025 [74/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:38.025 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:38.025 [76/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.025 [77/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:38.025 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:38.025 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:38.025 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:38.025 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:38.025 [82/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:38.025 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:38.025 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:38.025 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:38.025 [86/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:38.025 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:38.025 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:38.025 [89/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:38.025 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:38.025 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:38.025 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:38.025 [93/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:38.025 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:38.025 [95/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:38.025 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:38.025 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:38.025 [98/268] Linking static target lib/librte_telemetry.a 00:01:38.025 [99/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:38.025 [100/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:38.025 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:38.025 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:38.025 [103/268] Linking static target lib/librte_mempool.a 00:01:38.025 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:38.025 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:38.025 [106/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.026 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:38.026 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:38.026 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:38.026 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:38.026 [111/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:38.026 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:38.026 [113/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:38.026 [114/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:38.026 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:38.026 [116/268] Linking static target 
lib/librte_net.a 00:01:38.026 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:38.026 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:38.026 [119/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:38.026 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:38.026 [121/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:38.026 [122/268] Linking static target lib/librte_eal.a 00:01:38.026 [123/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:38.026 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:38.285 [125/268] Linking static target lib/librte_rcu.a 00:01:38.285 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:38.285 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:38.285 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:38.285 [129/268] Linking static target lib/librte_cmdline.a 00:01:38.285 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:38.285 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:38.285 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.285 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:38.285 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:38.285 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:38.285 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:38.285 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:38.285 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:38.285 [139/268] Generating lib/ring.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:38.285 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.285 [141/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.285 [142/268] Linking target lib/librte_log.so.24.1 00:01:38.285 [143/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:38.285 [144/268] Linking static target lib/librte_mbuf.a 00:01:38.285 [145/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:38.285 [146/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.285 [147/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:38.285 [148/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:38.285 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:38.285 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:38.285 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:38.285 [152/268] Linking static target lib/librte_timer.a 00:01:38.285 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:38.285 [154/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.285 [155/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.285 [156/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:38.543 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:38.543 [158/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:38.543 [159/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:38.543 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 
00:01:38.543 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:38.543 [162/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:38.543 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:38.543 [164/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:38.543 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.543 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:38.543 [167/268] Linking target lib/librte_kvargs.so.24.1 00:01:38.543 [168/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:38.543 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:38.543 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:38.543 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:38.543 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:38.543 [173/268] Linking target lib/librte_telemetry.so.24.1 00:01:38.543 [174/268] Linking static target lib/librte_dmadev.a 00:01:38.543 [175/268] Linking static target lib/librte_compressdev.a 00:01:38.543 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:38.543 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:38.543 [178/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:38.543 [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:38.543 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:38.543 [181/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:38.543 [182/268] Linking static target lib/librte_power.a 00:01:38.543 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:38.543 [184/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:01:38.543 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:38.543 [186/268] Linking static target lib/librte_security.a 00:01:38.543 [187/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:38.543 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:38.543 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:38.543 [190/268] Linking static target lib/librte_reorder.a 00:01:38.543 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:38.543 [192/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:38.543 [193/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:38.543 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:38.543 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:38.543 [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:38.543 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.543 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.543 [199/268] Linking static target drivers/librte_bus_vdev.a 00:01:38.543 [200/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.543 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:38.801 [202/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:38.801 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.801 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.801 [205/268] Linking static target lib/librte_hash.a 00:01:38.801 [206/268] Linking static target 
drivers/librte_mempool_ring.a 00:01:38.801 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:38.801 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:38.801 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.801 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.801 [211/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.801 [212/268] Linking static target drivers/librte_bus_pci.a 00:01:38.801 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:38.801 [214/268] Linking static target lib/librte_cryptodev.a 00:01:39.057 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.057 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.057 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.057 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:39.057 [219/268] Linking static target lib/librte_ethdev.a 00:01:39.057 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.057 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.057 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.314 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.314 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:39.314 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.573 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:39.573 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.506 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:40.506 [229/268] Linking static target lib/librte_vhost.a 00:01:40.763 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.135 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.435 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.435 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.692 [234/268] Linking target lib/librte_eal.so.24.1 00:01:47.692 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:47.692 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:47.692 [237/268] Linking target lib/librte_pci.so.24.1 00:01:47.692 [238/268] Linking target lib/librte_meter.so.24.1 00:01:47.692 [239/268] Linking target lib/librte_ring.so.24.1 00:01:47.692 [240/268] Linking target lib/librte_dmadev.so.24.1 00:01:47.692 [241/268] Linking target lib/librte_timer.so.24.1 00:01:47.950 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:47.950 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:47.950 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:47.950 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:47.950 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:47.950 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:47.950 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:47.950 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:47.950 
[250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:47.950 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:48.208 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:48.208 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:48.208 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:48.208 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:48.208 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:48.208 [257/268] Linking target lib/librte_compressdev.so.24.1 00:01:48.208 [258/268] Linking target lib/librte_net.so.24.1 00:01:48.466 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:48.466 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:48.466 [261/268] Linking target lib/librte_security.so.24.1 00:01:48.466 [262/268] Linking target lib/librte_hash.so.24.1 00:01:48.466 [263/268] Linking target lib/librte_cmdline.so.24.1 00:01:48.466 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:48.466 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:48.466 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:48.724 [267/268] Linking target lib/librte_power.so.24.1 00:01:48.724 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:48.724 INFO: autodetecting backend as ninja 00:01:48.724 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:49.657 CC lib/ut/ut.o 00:01:49.657 CC lib/ut_mock/mock.o 00:01:49.657 CC lib/log/log.o 00:01:49.657 CC lib/log/log_deprecated.o 00:01:49.657 CC lib/log/log_flags.o 00:01:49.657 LIB libspdk_ut.a 00:01:49.915 LIB libspdk_log.a 00:01:49.915 SO libspdk_ut.so.2.0 00:01:49.915 LIB libspdk_ut_mock.a 00:01:49.915 SO 
libspdk_log.so.7.0 00:01:49.915 SO libspdk_ut_mock.so.6.0 00:01:49.915 SYMLINK libspdk_ut.so 00:01:49.915 SYMLINK libspdk_log.so 00:01:49.915 SYMLINK libspdk_ut_mock.so 00:01:50.172 CC lib/dma/dma.o 00:01:50.172 CC lib/ioat/ioat.o 00:01:50.172 CC lib/util/base64.o 00:01:50.172 CC lib/util/bit_array.o 00:01:50.172 CC lib/util/cpuset.o 00:01:50.172 CC lib/util/crc16.o 00:01:50.172 CC lib/util/crc32.o 00:01:50.172 CC lib/util/crc32c.o 00:01:50.172 CC lib/util/crc32_ieee.o 00:01:50.172 CC lib/util/crc64.o 00:01:50.172 CC lib/util/dif.o 00:01:50.172 CC lib/util/fd.o 00:01:50.172 CC lib/util/file.o 00:01:50.172 CC lib/util/hexlify.o 00:01:50.172 CC lib/util/pipe.o 00:01:50.172 CC lib/util/iov.o 00:01:50.172 CC lib/util/strerror_tls.o 00:01:50.172 CC lib/util/math.o 00:01:50.172 CC lib/util/uuid.o 00:01:50.172 CC lib/util/string.o 00:01:50.172 CC lib/util/fd_group.o 00:01:50.172 CC lib/util/xor.o 00:01:50.172 CC lib/util/zipf.o 00:01:50.172 CXX lib/trace_parser/trace.o 00:01:50.429 CC lib/vfio_user/host/vfio_user_pci.o 00:01:50.429 CC lib/vfio_user/host/vfio_user.o 00:01:50.429 LIB libspdk_dma.a 00:01:50.429 SO libspdk_dma.so.4.0 00:01:50.430 SYMLINK libspdk_dma.so 00:01:50.430 LIB libspdk_ioat.a 00:01:50.430 SO libspdk_ioat.so.7.0 00:01:50.430 SYMLINK libspdk_ioat.so 00:01:50.687 LIB libspdk_vfio_user.a 00:01:50.687 SO libspdk_vfio_user.so.5.0 00:01:50.687 LIB libspdk_util.a 00:01:50.687 SYMLINK libspdk_vfio_user.so 00:01:50.687 SO libspdk_util.so.9.1 00:01:50.945 SYMLINK libspdk_util.so 00:01:50.945 LIB libspdk_trace_parser.a 00:01:50.945 SO libspdk_trace_parser.so.5.0 00:01:50.945 SYMLINK libspdk_trace_parser.so 00:01:51.203 CC lib/vmd/vmd.o 00:01:51.203 CC lib/vmd/led.o 00:01:51.203 CC lib/rdma_utils/rdma_utils.o 00:01:51.203 CC lib/env_dpdk/env.o 00:01:51.203 CC lib/env_dpdk/pci.o 00:01:51.203 CC lib/idxd/idxd.o 00:01:51.203 CC lib/env_dpdk/memory.o 00:01:51.203 CC lib/idxd/idxd_user.o 00:01:51.203 CC lib/idxd/idxd_kernel.o 00:01:51.203 CC lib/env_dpdk/init.o 
00:01:51.203 CC lib/env_dpdk/threads.o 00:01:51.203 CC lib/env_dpdk/pci_vmd.o 00:01:51.203 CC lib/env_dpdk/pci_ioat.o 00:01:51.203 CC lib/env_dpdk/pci_virtio.o 00:01:51.203 CC lib/rdma_provider/common.o 00:01:51.203 CC lib/json/json_parse.o 00:01:51.203 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:51.203 CC lib/json/json_util.o 00:01:51.203 CC lib/env_dpdk/pci_idxd.o 00:01:51.203 CC lib/json/json_write.o 00:01:51.203 CC lib/env_dpdk/pci_event.o 00:01:51.203 CC lib/env_dpdk/sigbus_handler.o 00:01:51.203 CC lib/env_dpdk/pci_dpdk.o 00:01:51.203 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:51.203 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:51.203 CC lib/conf/conf.o 00:01:51.203 LIB libspdk_rdma_provider.a 00:01:51.561 SO libspdk_rdma_provider.so.6.0 00:01:51.561 LIB libspdk_rdma_utils.a 00:01:51.561 LIB libspdk_conf.a 00:01:51.561 SO libspdk_rdma_utils.so.1.0 00:01:51.561 SO libspdk_conf.so.6.0 00:01:51.561 LIB libspdk_json.a 00:01:51.561 SYMLINK libspdk_rdma_provider.so 00:01:51.561 SO libspdk_json.so.6.0 00:01:51.561 SYMLINK libspdk_rdma_utils.so 00:01:51.561 SYMLINK libspdk_conf.so 00:01:51.561 SYMLINK libspdk_json.so 00:01:51.561 LIB libspdk_idxd.a 00:01:51.561 SO libspdk_idxd.so.12.0 00:01:51.561 LIB libspdk_vmd.a 00:01:51.561 SO libspdk_vmd.so.6.0 00:01:51.561 SYMLINK libspdk_idxd.so 00:01:51.819 SYMLINK libspdk_vmd.so 00:01:51.819 CC lib/jsonrpc/jsonrpc_server.o 00:01:51.819 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:51.819 CC lib/jsonrpc/jsonrpc_client.o 00:01:51.819 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:52.077 LIB libspdk_jsonrpc.a 00:01:52.077 SO libspdk_jsonrpc.so.6.0 00:01:52.077 SYMLINK libspdk_jsonrpc.so 00:01:52.077 LIB libspdk_env_dpdk.a 00:01:52.077 SO libspdk_env_dpdk.so.14.1 00:01:52.335 SYMLINK libspdk_env_dpdk.so 00:01:52.335 CC lib/rpc/rpc.o 00:01:52.592 LIB libspdk_rpc.a 00:01:52.592 SO libspdk_rpc.so.6.0 00:01:52.593 SYMLINK libspdk_rpc.so 00:01:52.850 CC lib/keyring/keyring.o 00:01:52.850 CC lib/keyring/keyring_rpc.o 00:01:52.850 CC lib/trace/trace.o 
00:01:52.850 CC lib/trace/trace_flags.o 00:01:52.850 CC lib/trace/trace_rpc.o 00:01:52.850 CC lib/notify/notify.o 00:01:52.850 CC lib/notify/notify_rpc.o 00:01:53.107 LIB libspdk_notify.a 00:01:53.107 LIB libspdk_keyring.a 00:01:53.107 SO libspdk_notify.so.6.0 00:01:53.107 SO libspdk_keyring.so.1.0 00:01:53.107 LIB libspdk_trace.a 00:01:53.107 SO libspdk_trace.so.10.0 00:01:53.107 SYMLINK libspdk_notify.so 00:01:53.107 SYMLINK libspdk_keyring.so 00:01:53.107 SYMLINK libspdk_trace.so 00:01:53.365 CC lib/sock/sock.o 00:01:53.365 CC lib/sock/sock_rpc.o 00:01:53.623 CC lib/thread/thread.o 00:01:53.623 CC lib/thread/iobuf.o 00:01:53.880 LIB libspdk_sock.a 00:01:53.880 SO libspdk_sock.so.10.0 00:01:53.880 SYMLINK libspdk_sock.so 00:01:54.138 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:54.138 CC lib/nvme/nvme_ctrlr.o 00:01:54.138 CC lib/nvme/nvme_fabric.o 00:01:54.138 CC lib/nvme/nvme_ns_cmd.o 00:01:54.138 CC lib/nvme/nvme_ns.o 00:01:54.138 CC lib/nvme/nvme_pcie_common.o 00:01:54.138 CC lib/nvme/nvme_pcie.o 00:01:54.138 CC lib/nvme/nvme_qpair.o 00:01:54.138 CC lib/nvme/nvme.o 00:01:54.138 CC lib/nvme/nvme_quirks.o 00:01:54.138 CC lib/nvme/nvme_transport.o 00:01:54.138 CC lib/nvme/nvme_discovery.o 00:01:54.138 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:54.138 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:54.138 CC lib/nvme/nvme_opal.o 00:01:54.138 CC lib/nvme/nvme_tcp.o 00:01:54.138 CC lib/nvme/nvme_io_msg.o 00:01:54.138 CC lib/nvme/nvme_poll_group.o 00:01:54.138 CC lib/nvme/nvme_zns.o 00:01:54.138 CC lib/nvme/nvme_auth.o 00:01:54.138 CC lib/nvme/nvme_stubs.o 00:01:54.138 CC lib/nvme/nvme_cuse.o 00:01:54.138 CC lib/nvme/nvme_vfio_user.o 00:01:54.138 CC lib/nvme/nvme_rdma.o 00:01:54.703 LIB libspdk_thread.a 00:01:54.703 SO libspdk_thread.so.10.1 00:01:54.703 SYMLINK libspdk_thread.so 00:01:54.961 CC lib/accel/accel.o 00:01:54.961 CC lib/accel/accel_rpc.o 00:01:54.961 CC lib/accel/accel_sw.o 00:01:54.961 CC lib/init/json_config.o 00:01:54.961 CC lib/init/subsystem.o 00:01:54.961 CC 
lib/init/subsystem_rpc.o 00:01:54.961 CC lib/init/rpc.o 00:01:54.961 CC lib/virtio/virtio.o 00:01:54.961 CC lib/virtio/virtio_vhost_user.o 00:01:54.961 CC lib/virtio/virtio_vfio_user.o 00:01:54.961 CC lib/virtio/virtio_pci.o 00:01:54.961 CC lib/blob/blobstore.o 00:01:54.961 CC lib/blob/request.o 00:01:54.961 CC lib/blob/zeroes.o 00:01:54.961 CC lib/blob/blob_bs_dev.o 00:01:54.961 CC lib/vfu_tgt/tgt_endpoint.o 00:01:54.961 CC lib/vfu_tgt/tgt_rpc.o 00:01:55.219 LIB libspdk_init.a 00:01:55.219 SO libspdk_init.so.5.0 00:01:55.219 LIB libspdk_virtio.a 00:01:55.219 LIB libspdk_vfu_tgt.a 00:01:55.219 SYMLINK libspdk_init.so 00:01:55.219 SO libspdk_virtio.so.7.0 00:01:55.219 SO libspdk_vfu_tgt.so.3.0 00:01:55.219 SYMLINK libspdk_vfu_tgt.so 00:01:55.219 SYMLINK libspdk_virtio.so 00:01:55.477 CC lib/event/app.o 00:01:55.477 CC lib/event/reactor.o 00:01:55.477 CC lib/event/log_rpc.o 00:01:55.477 CC lib/event/app_rpc.o 00:01:55.477 CC lib/event/scheduler_static.o 00:01:55.735 LIB libspdk_accel.a 00:01:55.735 SO libspdk_accel.so.15.1 00:01:55.735 SYMLINK libspdk_accel.so 00:01:55.735 LIB libspdk_nvme.a 00:01:55.735 SO libspdk_nvme.so.13.1 00:01:55.735 LIB libspdk_event.a 00:01:55.992 SO libspdk_event.so.14.0 00:01:55.992 SYMLINK libspdk_event.so 00:01:55.992 CC lib/bdev/bdev.o 00:01:55.992 CC lib/bdev/bdev_rpc.o 00:01:55.992 CC lib/bdev/bdev_zone.o 00:01:55.992 CC lib/bdev/part.o 00:01:55.992 CC lib/bdev/scsi_nvme.o 00:01:55.992 SYMLINK libspdk_nvme.so 00:01:56.922 LIB libspdk_blob.a 00:01:57.180 SO libspdk_blob.so.11.0 00:01:57.180 SYMLINK libspdk_blob.so 00:01:57.438 CC lib/blobfs/blobfs.o 00:01:57.438 CC lib/blobfs/tree.o 00:01:57.438 CC lib/lvol/lvol.o 00:01:57.695 LIB libspdk_bdev.a 00:01:57.695 SO libspdk_bdev.so.15.1 00:01:57.952 SYMLINK libspdk_bdev.so 00:01:57.952 LIB libspdk_blobfs.a 00:01:57.952 SO libspdk_blobfs.so.10.0 00:01:57.952 LIB libspdk_lvol.a 00:01:58.210 SYMLINK libspdk_blobfs.so 00:01:58.210 SO libspdk_lvol.so.10.0 00:01:58.210 CC lib/nvmf/ctrlr.o 
00:01:58.210 CC lib/nvmf/ctrlr_discovery.o 00:01:58.210 CC lib/ublk/ublk.o 00:01:58.210 CC lib/ublk/ublk_rpc.o 00:01:58.210 CC lib/nvmf/ctrlr_bdev.o 00:01:58.210 CC lib/nvmf/subsystem.o 00:01:58.210 CC lib/nvmf/nvmf.o 00:01:58.210 CC lib/nvmf/nvmf_rpc.o 00:01:58.210 SYMLINK libspdk_lvol.so 00:01:58.210 CC lib/ftl/ftl_init.o 00:01:58.210 CC lib/nvmf/transport.o 00:01:58.210 CC lib/ftl/ftl_core.o 00:01:58.210 CC lib/nbd/nbd.o 00:01:58.210 CC lib/nvmf/stubs.o 00:01:58.210 CC lib/nvmf/tcp.o 00:01:58.210 CC lib/ftl/ftl_layout.o 00:01:58.210 CC lib/nvmf/mdns_server.o 00:01:58.210 CC lib/scsi/dev.o 00:01:58.210 CC lib/ftl/ftl_debug.o 00:01:58.210 CC lib/nbd/nbd_rpc.o 00:01:58.210 CC lib/ftl/ftl_io.o 00:01:58.210 CC lib/nvmf/vfio_user.o 00:01:58.210 CC lib/scsi/lun.o 00:01:58.210 CC lib/ftl/ftl_l2p.o 00:01:58.210 CC lib/scsi/port.o 00:01:58.210 CC lib/ftl/ftl_sb.o 00:01:58.210 CC lib/nvmf/rdma.o 00:01:58.210 CC lib/scsi/scsi.o 00:01:58.210 CC lib/ftl/ftl_band.o 00:01:58.210 CC lib/ftl/ftl_l2p_flat.o 00:01:58.210 CC lib/nvmf/auth.o 00:01:58.210 CC lib/scsi/scsi_bdev.o 00:01:58.210 CC lib/scsi/scsi_pr.o 00:01:58.210 CC lib/ftl/ftl_nv_cache.o 00:01:58.210 CC lib/scsi/scsi_rpc.o 00:01:58.210 CC lib/scsi/task.o 00:01:58.210 CC lib/ftl/ftl_band_ops.o 00:01:58.210 CC lib/ftl/ftl_writer.o 00:01:58.210 CC lib/ftl/ftl_rq.o 00:01:58.210 CC lib/ftl/ftl_reloc.o 00:01:58.210 CC lib/ftl/ftl_l2p_cache.o 00:01:58.210 CC lib/ftl/ftl_p2l.o 00:01:58.210 CC lib/ftl/mngt/ftl_mngt.o 00:01:58.210 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:58.210 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:58.210 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:58.210 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:58.210 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:58.210 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:58.210 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:58.210 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:58.210 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:58.210 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:58.210 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:58.210 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:58.210 CC lib/ftl/utils/ftl_conf.o 00:01:58.210 CC lib/ftl/utils/ftl_md.o 00:01:58.210 CC lib/ftl/utils/ftl_mempool.o 00:01:58.210 CC lib/ftl/utils/ftl_bitmap.o 00:01:58.210 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:58.210 CC lib/ftl/utils/ftl_property.o 00:01:58.210 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:58.210 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:58.210 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:58.210 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:58.210 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:58.210 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:58.210 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:58.210 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:58.210 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:58.210 CC lib/ftl/base/ftl_base_dev.o 00:01:58.210 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:58.210 CC lib/ftl/base/ftl_base_bdev.o 00:01:58.210 CC lib/ftl/ftl_trace.o 00:01:58.776 LIB libspdk_nbd.a 00:01:58.776 SO libspdk_nbd.so.7.0 00:01:58.776 SYMLINK libspdk_nbd.so 00:01:58.776 LIB libspdk_ublk.a 00:01:58.776 LIB libspdk_scsi.a 00:01:58.776 SO libspdk_ublk.so.3.0 00:01:58.776 SO libspdk_scsi.so.9.0 00:01:59.035 SYMLINK libspdk_ublk.so 00:01:59.035 SYMLINK libspdk_scsi.so 00:01:59.293 LIB libspdk_ftl.a 00:01:59.293 CC lib/vhost/vhost.o 00:01:59.293 CC lib/vhost/vhost_rpc.o 00:01:59.293 CC lib/iscsi/conn.o 00:01:59.293 CC lib/vhost/vhost_scsi.o 00:01:59.293 CC lib/vhost/vhost_blk.o 00:01:59.293 CC lib/iscsi/init_grp.o 00:01:59.293 CC lib/vhost/rte_vhost_user.o 00:01:59.293 CC lib/iscsi/iscsi.o 00:01:59.293 CC lib/iscsi/md5.o 00:01:59.293 CC lib/iscsi/param.o 00:01:59.293 CC lib/iscsi/portal_grp.o 00:01:59.293 CC lib/iscsi/tgt_node.o 00:01:59.293 SO libspdk_ftl.so.9.0 00:01:59.293 CC lib/iscsi/iscsi_subsystem.o 00:01:59.293 CC lib/iscsi/iscsi_rpc.o 00:01:59.293 CC lib/iscsi/task.o 00:01:59.551 SYMLINK libspdk_ftl.so 00:01:59.809 LIB libspdk_nvmf.a 00:01:59.809 SO libspdk_nvmf.so.18.1 00:02:00.068 SYMLINK libspdk_nvmf.so 00:02:00.068 LIB 
libspdk_vhost.a 00:02:00.068 SO libspdk_vhost.so.8.0 00:02:00.327 SYMLINK libspdk_vhost.so 00:02:00.327 LIB libspdk_iscsi.a 00:02:00.327 SO libspdk_iscsi.so.8.0 00:02:00.327 SYMLINK libspdk_iscsi.so 00:02:00.892 CC module/vfu_device/vfu_virtio_blk.o 00:02:00.892 CC module/vfu_device/vfu_virtio.o 00:02:00.892 CC module/vfu_device/vfu_virtio_scsi.o 00:02:00.892 CC module/vfu_device/vfu_virtio_rpc.o 00:02:00.892 CC module/env_dpdk/env_dpdk_rpc.o 00:02:01.154 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:01.154 CC module/scheduler/gscheduler/gscheduler.o 00:02:01.154 CC module/keyring/linux/keyring.o 00:02:01.154 CC module/keyring/linux/keyring_rpc.o 00:02:01.154 CC module/keyring/file/keyring.o 00:02:01.154 CC module/keyring/file/keyring_rpc.o 00:02:01.154 CC module/blob/bdev/blob_bdev.o 00:02:01.154 CC module/sock/posix/posix.o 00:02:01.154 CC module/accel/dsa/accel_dsa_rpc.o 00:02:01.154 CC module/accel/dsa/accel_dsa.o 00:02:01.154 CC module/accel/iaa/accel_iaa.o 00:02:01.155 LIB libspdk_env_dpdk_rpc.a 00:02:01.155 CC module/accel/error/accel_error.o 00:02:01.155 CC module/accel/iaa/accel_iaa_rpc.o 00:02:01.155 CC module/accel/error/accel_error_rpc.o 00:02:01.155 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:01.155 CC module/accel/ioat/accel_ioat.o 00:02:01.155 CC module/accel/ioat/accel_ioat_rpc.o 00:02:01.155 SO libspdk_env_dpdk_rpc.so.6.0 00:02:01.155 SYMLINK libspdk_env_dpdk_rpc.so 00:02:01.155 LIB libspdk_scheduler_gscheduler.a 00:02:01.155 LIB libspdk_scheduler_dpdk_governor.a 00:02:01.155 LIB libspdk_keyring_file.a 00:02:01.155 LIB libspdk_keyring_linux.a 00:02:01.155 SO libspdk_scheduler_gscheduler.so.4.0 00:02:01.155 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:01.155 SO libspdk_keyring_file.so.1.0 00:02:01.155 SO libspdk_keyring_linux.so.1.0 00:02:01.155 LIB libspdk_accel_error.a 00:02:01.411 SYMLINK libspdk_scheduler_gscheduler.so 00:02:01.411 LIB libspdk_accel_iaa.a 00:02:01.411 LIB libspdk_accel_ioat.a 00:02:01.411 LIB 
libspdk_scheduler_dynamic.a 00:02:01.411 SYMLINK libspdk_keyring_file.so 00:02:01.411 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:01.411 LIB libspdk_blob_bdev.a 00:02:01.411 SO libspdk_accel_error.so.2.0 00:02:01.411 SYMLINK libspdk_keyring_linux.so 00:02:01.411 LIB libspdk_accel_dsa.a 00:02:01.411 SO libspdk_scheduler_dynamic.so.4.0 00:02:01.411 SO libspdk_blob_bdev.so.11.0 00:02:01.411 SO libspdk_accel_ioat.so.6.0 00:02:01.411 SO libspdk_accel_iaa.so.3.0 00:02:01.411 SO libspdk_accel_dsa.so.5.0 00:02:01.411 SYMLINK libspdk_accel_error.so 00:02:01.412 SYMLINK libspdk_scheduler_dynamic.so 00:02:01.412 SYMLINK libspdk_blob_bdev.so 00:02:01.412 SYMLINK libspdk_accel_iaa.so 00:02:01.412 SYMLINK libspdk_accel_ioat.so 00:02:01.412 SYMLINK libspdk_accel_dsa.so 00:02:01.412 LIB libspdk_vfu_device.a 00:02:01.412 SO libspdk_vfu_device.so.3.0 00:02:01.668 SYMLINK libspdk_vfu_device.so 00:02:01.668 LIB libspdk_sock_posix.a 00:02:01.668 SO libspdk_sock_posix.so.6.0 00:02:01.668 SYMLINK libspdk_sock_posix.so 00:02:01.926 CC module/bdev/null/bdev_null.o 00:02:01.926 CC module/bdev/ftl/bdev_ftl.o 00:02:01.926 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:01.926 CC module/bdev/null/bdev_null_rpc.o 00:02:01.926 CC module/bdev/delay/vbdev_delay.o 00:02:01.926 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:01.926 CC module/bdev/raid/bdev_raid.o 00:02:01.926 CC module/bdev/raid/bdev_raid_rpc.o 00:02:01.926 CC module/bdev/raid/bdev_raid_sb.o 00:02:01.926 CC module/bdev/raid/raid0.o 00:02:01.926 CC module/bdev/gpt/gpt.o 00:02:01.926 CC module/bdev/raid/raid1.o 00:02:01.926 CC module/bdev/raid/concat.o 00:02:01.926 CC module/blobfs/bdev/blobfs_bdev.o 00:02:01.926 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:01.926 CC module/bdev/gpt/vbdev_gpt.o 00:02:01.926 CC module/bdev/nvme/bdev_nvme.o 00:02:01.926 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:01.926 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:01.926 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:01.926 CC 
module/bdev/nvme/nvme_rpc.o 00:02:01.926 CC module/bdev/nvme/vbdev_opal.o 00:02:01.926 CC module/bdev/passthru/vbdev_passthru.o 00:02:01.926 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:01.926 CC module/bdev/error/vbdev_error.o 00:02:01.926 CC module/bdev/malloc/bdev_malloc.o 00:02:01.926 CC module/bdev/nvme/bdev_mdns_client.o 00:02:01.926 CC module/bdev/error/vbdev_error_rpc.o 00:02:01.926 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:01.926 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:01.927 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:01.927 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:01.927 CC module/bdev/lvol/vbdev_lvol.o 00:02:01.927 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:01.927 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:01.927 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:01.927 CC module/bdev/iscsi/bdev_iscsi.o 00:02:01.927 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:01.927 CC module/bdev/aio/bdev_aio.o 00:02:01.927 CC module/bdev/split/vbdev_split.o 00:02:01.927 CC module/bdev/aio/bdev_aio_rpc.o 00:02:01.927 CC module/bdev/split/vbdev_split_rpc.o 00:02:02.184 LIB libspdk_blobfs_bdev.a 00:02:02.184 LIB libspdk_bdev_null.a 00:02:02.184 SO libspdk_blobfs_bdev.so.6.0 00:02:02.184 SO libspdk_bdev_null.so.6.0 00:02:02.184 LIB libspdk_bdev_error.a 00:02:02.184 LIB libspdk_bdev_split.a 00:02:02.184 LIB libspdk_bdev_gpt.a 00:02:02.184 SO libspdk_bdev_error.so.6.0 00:02:02.184 LIB libspdk_bdev_ftl.a 00:02:02.184 SYMLINK libspdk_blobfs_bdev.so 00:02:02.184 SO libspdk_bdev_split.so.6.0 00:02:02.184 SYMLINK libspdk_bdev_null.so 00:02:02.184 SO libspdk_bdev_gpt.so.6.0 00:02:02.184 SO libspdk_bdev_ftl.so.6.0 00:02:02.184 LIB libspdk_bdev_passthru.a 00:02:02.184 SYMLINK libspdk_bdev_error.so 00:02:02.184 LIB libspdk_bdev_aio.a 00:02:02.184 LIB libspdk_bdev_malloc.a 00:02:02.184 LIB libspdk_bdev_zone_block.a 00:02:02.184 SO libspdk_bdev_passthru.so.6.0 00:02:02.184 SYMLINK libspdk_bdev_split.so 00:02:02.184 SYMLINK libspdk_bdev_gpt.so 00:02:02.184 LIB 
libspdk_bdev_delay.a 00:02:02.184 SO libspdk_bdev_aio.so.6.0 00:02:02.184 SYMLINK libspdk_bdev_ftl.so 00:02:02.184 SO libspdk_bdev_zone_block.so.6.0 00:02:02.184 LIB libspdk_bdev_iscsi.a 00:02:02.184 SO libspdk_bdev_malloc.so.6.0 00:02:02.184 SO libspdk_bdev_delay.so.6.0 00:02:02.184 SYMLINK libspdk_bdev_passthru.so 00:02:02.184 SYMLINK libspdk_bdev_aio.so 00:02:02.184 SO libspdk_bdev_iscsi.so.6.0 00:02:02.441 SYMLINK libspdk_bdev_zone_block.so 00:02:02.441 LIB libspdk_bdev_lvol.a 00:02:02.441 SYMLINK libspdk_bdev_delay.so 00:02:02.441 SYMLINK libspdk_bdev_malloc.so 00:02:02.441 LIB libspdk_bdev_virtio.a 00:02:02.441 SO libspdk_bdev_lvol.so.6.0 00:02:02.441 SYMLINK libspdk_bdev_iscsi.so 00:02:02.441 SO libspdk_bdev_virtio.so.6.0 00:02:02.441 SYMLINK libspdk_bdev_lvol.so 00:02:02.441 SYMLINK libspdk_bdev_virtio.so 00:02:02.698 LIB libspdk_bdev_raid.a 00:02:02.699 SO libspdk_bdev_raid.so.6.0 00:02:02.699 SYMLINK libspdk_bdev_raid.so 00:02:03.631 LIB libspdk_bdev_nvme.a 00:02:03.631 SO libspdk_bdev_nvme.so.7.0 00:02:03.631 SYMLINK libspdk_bdev_nvme.so 00:02:04.197 CC module/event/subsystems/scheduler/scheduler.o 00:02:04.197 CC module/event/subsystems/sock/sock.o 00:02:04.197 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:04.197 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:04.197 CC module/event/subsystems/keyring/keyring.o 00:02:04.197 CC module/event/subsystems/vmd/vmd.o 00:02:04.197 CC module/event/subsystems/iobuf/iobuf.o 00:02:04.197 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:04.197 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:04.197 LIB libspdk_event_vhost_blk.a 00:02:04.197 LIB libspdk_event_scheduler.a 00:02:04.455 LIB libspdk_event_sock.a 00:02:04.455 LIB libspdk_event_keyring.a 00:02:04.455 LIB libspdk_event_vfu_tgt.a 00:02:04.455 SO libspdk_event_vhost_blk.so.3.0 00:02:04.455 SO libspdk_event_scheduler.so.4.0 00:02:04.455 LIB libspdk_event_vmd.a 00:02:04.455 SO libspdk_event_sock.so.5.0 00:02:04.455 LIB libspdk_event_iobuf.a 
00:02:04.455 SO libspdk_event_keyring.so.1.0 00:02:04.455 SO libspdk_event_vfu_tgt.so.3.0 00:02:04.455 SO libspdk_event_iobuf.so.3.0 00:02:04.455 SO libspdk_event_vmd.so.6.0 00:02:04.455 SYMLINK libspdk_event_scheduler.so 00:02:04.455 SYMLINK libspdk_event_vhost_blk.so 00:02:04.455 SYMLINK libspdk_event_sock.so 00:02:04.455 SYMLINK libspdk_event_keyring.so 00:02:04.455 SYMLINK libspdk_event_vfu_tgt.so 00:02:04.455 SYMLINK libspdk_event_iobuf.so 00:02:04.455 SYMLINK libspdk_event_vmd.so 00:02:04.713 CC module/event/subsystems/accel/accel.o 00:02:04.970 LIB libspdk_event_accel.a 00:02:04.970 SO libspdk_event_accel.so.6.0 00:02:04.970 SYMLINK libspdk_event_accel.so 00:02:05.227 CC module/event/subsystems/bdev/bdev.o 00:02:05.484 LIB libspdk_event_bdev.a 00:02:05.484 SO libspdk_event_bdev.so.6.0 00:02:05.484 SYMLINK libspdk_event_bdev.so 00:02:05.742 CC module/event/subsystems/scsi/scsi.o 00:02:05.742 CC module/event/subsystems/ublk/ublk.o 00:02:05.742 CC module/event/subsystems/nbd/nbd.o 00:02:05.742 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:05.742 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:05.999 LIB libspdk_event_ublk.a 00:02:05.999 LIB libspdk_event_scsi.a 00:02:05.999 LIB libspdk_event_nbd.a 00:02:05.999 SO libspdk_event_scsi.so.6.0 00:02:05.999 SO libspdk_event_ublk.so.3.0 00:02:05.999 SO libspdk_event_nbd.so.6.0 00:02:05.999 SYMLINK libspdk_event_scsi.so 00:02:05.999 SYMLINK libspdk_event_ublk.so 00:02:05.999 LIB libspdk_event_nvmf.a 00:02:05.999 SYMLINK libspdk_event_nbd.so 00:02:05.999 SO libspdk_event_nvmf.so.6.0 00:02:06.257 SYMLINK libspdk_event_nvmf.so 00:02:06.257 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:06.257 CC module/event/subsystems/iscsi/iscsi.o 00:02:06.514 LIB libspdk_event_vhost_scsi.a 00:02:06.514 SO libspdk_event_vhost_scsi.so.3.0 00:02:06.514 LIB libspdk_event_iscsi.a 00:02:06.514 SYMLINK libspdk_event_vhost_scsi.so 00:02:06.514 SO libspdk_event_iscsi.so.6.0 00:02:06.514 SYMLINK libspdk_event_iscsi.so 00:02:06.772 
SO libspdk.so.6.0 00:02:06.772 SYMLINK libspdk.so 00:02:07.030 TEST_HEADER include/spdk/accel.h 00:02:07.030 CC test/rpc_client/rpc_client_test.o 00:02:07.030 TEST_HEADER include/spdk/assert.h 00:02:07.030 TEST_HEADER include/spdk/accel_module.h 00:02:07.030 TEST_HEADER include/spdk/barrier.h 00:02:07.030 TEST_HEADER include/spdk/base64.h 00:02:07.030 CC app/spdk_lspci/spdk_lspci.o 00:02:07.030 TEST_HEADER include/spdk/bdev_zone.h 00:02:07.030 TEST_HEADER include/spdk/bit_array.h 00:02:07.030 TEST_HEADER include/spdk/bdev.h 00:02:07.030 TEST_HEADER include/spdk/bdev_module.h 00:02:07.030 TEST_HEADER include/spdk/blob_bdev.h 00:02:07.030 TEST_HEADER include/spdk/bit_pool.h 00:02:07.030 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:07.030 TEST_HEADER include/spdk/blobfs.h 00:02:07.030 TEST_HEADER include/spdk/config.h 00:02:07.030 TEST_HEADER include/spdk/blob.h 00:02:07.030 CC app/spdk_nvme_discover/discovery_aer.o 00:02:07.030 TEST_HEADER include/spdk/conf.h 00:02:07.030 TEST_HEADER include/spdk/cpuset.h 00:02:07.030 TEST_HEADER include/spdk/crc32.h 00:02:07.030 TEST_HEADER include/spdk/crc16.h 00:02:07.030 TEST_HEADER include/spdk/crc64.h 00:02:07.030 TEST_HEADER include/spdk/dif.h 00:02:07.030 CC app/trace_record/trace_record.o 00:02:07.030 TEST_HEADER include/spdk/dma.h 00:02:07.030 TEST_HEADER include/spdk/endian.h 00:02:07.030 TEST_HEADER include/spdk/env_dpdk.h 00:02:07.030 CXX app/trace/trace.o 00:02:07.030 TEST_HEADER include/spdk/event.h 00:02:07.030 TEST_HEADER include/spdk/env.h 00:02:07.030 CC app/spdk_top/spdk_top.o 00:02:07.030 TEST_HEADER include/spdk/fd_group.h 00:02:07.030 TEST_HEADER include/spdk/file.h 00:02:07.030 TEST_HEADER include/spdk/fd.h 00:02:07.030 TEST_HEADER include/spdk/ftl.h 00:02:07.030 CC app/spdk_nvme_perf/perf.o 00:02:07.030 TEST_HEADER include/spdk/histogram_data.h 00:02:07.030 TEST_HEADER include/spdk/gpt_spec.h 00:02:07.030 TEST_HEADER include/spdk/hexlify.h 00:02:07.030 TEST_HEADER include/spdk/idxd.h 00:02:07.030 TEST_HEADER 
include/spdk/init.h 00:02:07.030 TEST_HEADER include/spdk/ioat_spec.h 00:02:07.030 TEST_HEADER include/spdk/idxd_spec.h 00:02:07.030 TEST_HEADER include/spdk/iscsi_spec.h 00:02:07.030 TEST_HEADER include/spdk/json.h 00:02:07.030 TEST_HEADER include/spdk/ioat.h 00:02:07.030 TEST_HEADER include/spdk/jsonrpc.h 00:02:07.030 TEST_HEADER include/spdk/keyring.h 00:02:07.030 TEST_HEADER include/spdk/keyring_module.h 00:02:07.030 TEST_HEADER include/spdk/log.h 00:02:07.030 TEST_HEADER include/spdk/likely.h 00:02:07.030 TEST_HEADER include/spdk/lvol.h 00:02:07.030 CC app/spdk_nvme_identify/identify.o 00:02:07.030 TEST_HEADER include/spdk/memory.h 00:02:07.030 TEST_HEADER include/spdk/nbd.h 00:02:07.030 TEST_HEADER include/spdk/notify.h 00:02:07.030 TEST_HEADER include/spdk/mmio.h 00:02:07.030 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:07.030 TEST_HEADER include/spdk/nvme.h 00:02:07.030 TEST_HEADER include/spdk/nvme_intel.h 00:02:07.030 TEST_HEADER include/spdk/nvme_spec.h 00:02:07.030 TEST_HEADER include/spdk/nvme_zns.h 00:02:07.030 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:07.030 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:07.030 TEST_HEADER include/spdk/nvmf.h 00:02:07.030 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:07.030 TEST_HEADER include/spdk/nvmf_spec.h 00:02:07.030 TEST_HEADER include/spdk/opal.h 00:02:07.030 TEST_HEADER include/spdk/nvmf_transport.h 00:02:07.030 TEST_HEADER include/spdk/opal_spec.h 00:02:07.030 TEST_HEADER include/spdk/pipe.h 00:02:07.030 TEST_HEADER include/spdk/pci_ids.h 00:02:07.030 TEST_HEADER include/spdk/reduce.h 00:02:07.030 TEST_HEADER include/spdk/queue.h 00:02:07.030 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:07.030 TEST_HEADER include/spdk/scheduler.h 00:02:07.030 TEST_HEADER include/spdk/rpc.h 00:02:07.030 TEST_HEADER include/spdk/scsi.h 00:02:07.030 TEST_HEADER include/spdk/scsi_spec.h 00:02:07.030 TEST_HEADER include/spdk/stdinc.h 00:02:07.030 TEST_HEADER include/spdk/sock.h 00:02:07.030 TEST_HEADER include/spdk/string.h 
00:02:07.030 TEST_HEADER include/spdk/thread.h 00:02:07.030 CC app/spdk_dd/spdk_dd.o 00:02:07.030 TEST_HEADER include/spdk/trace.h 00:02:07.030 TEST_HEADER include/spdk/tree.h 00:02:07.030 TEST_HEADER include/spdk/trace_parser.h 00:02:07.030 TEST_HEADER include/spdk/ublk.h 00:02:07.030 TEST_HEADER include/spdk/util.h 00:02:07.030 TEST_HEADER include/spdk/version.h 00:02:07.030 TEST_HEADER include/spdk/uuid.h 00:02:07.030 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:07.030 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:07.030 TEST_HEADER include/spdk/vmd.h 00:02:07.030 TEST_HEADER include/spdk/xor.h 00:02:07.030 TEST_HEADER include/spdk/vhost.h 00:02:07.030 CXX test/cpp_headers/accel.o 00:02:07.030 TEST_HEADER include/spdk/zipf.h 00:02:07.030 CXX test/cpp_headers/accel_module.o 00:02:07.030 CXX test/cpp_headers/assert.o 00:02:07.030 CXX test/cpp_headers/barrier.o 00:02:07.030 CXX test/cpp_headers/base64.o 00:02:07.030 CXX test/cpp_headers/bdev.o 00:02:07.030 CC app/iscsi_tgt/iscsi_tgt.o 00:02:07.030 CXX test/cpp_headers/bdev_module.o 00:02:07.030 CXX test/cpp_headers/bit_array.o 00:02:07.030 CXX test/cpp_headers/bdev_zone.o 00:02:07.030 CXX test/cpp_headers/bit_pool.o 00:02:07.030 CXX test/cpp_headers/blob_bdev.o 00:02:07.030 CXX test/cpp_headers/blobfs.o 00:02:07.030 CXX test/cpp_headers/blobfs_bdev.o 00:02:07.030 CXX test/cpp_headers/blob.o 00:02:07.030 CXX test/cpp_headers/conf.o 00:02:07.030 CXX test/cpp_headers/cpuset.o 00:02:07.030 CXX test/cpp_headers/config.o 00:02:07.030 CXX test/cpp_headers/crc16.o 00:02:07.030 CXX test/cpp_headers/crc32.o 00:02:07.299 CXX test/cpp_headers/crc64.o 00:02:07.299 CXX test/cpp_headers/dif.o 00:02:07.299 CXX test/cpp_headers/dma.o 00:02:07.299 CXX test/cpp_headers/env_dpdk.o 00:02:07.299 CXX test/cpp_headers/endian.o 00:02:07.299 CXX test/cpp_headers/env.o 00:02:07.299 CXX test/cpp_headers/fd_group.o 00:02:07.299 CXX test/cpp_headers/event.o 00:02:07.299 CXX test/cpp_headers/fd.o 00:02:07.299 CXX test/cpp_headers/ftl.o 
00:02:07.299 CXX test/cpp_headers/file.o 00:02:07.299 CXX test/cpp_headers/gpt_spec.o 00:02:07.299 CXX test/cpp_headers/idxd.o 00:02:07.299 CXX test/cpp_headers/hexlify.o 00:02:07.299 CXX test/cpp_headers/histogram_data.o 00:02:07.299 CXX test/cpp_headers/idxd_spec.o 00:02:07.299 CXX test/cpp_headers/init.o 00:02:07.299 CXX test/cpp_headers/ioat.o 00:02:07.299 CXX test/cpp_headers/iscsi_spec.o 00:02:07.299 CC app/nvmf_tgt/nvmf_main.o 00:02:07.299 CXX test/cpp_headers/ioat_spec.o 00:02:07.299 CXX test/cpp_headers/json.o 00:02:07.299 CXX test/cpp_headers/jsonrpc.o 00:02:07.299 CXX test/cpp_headers/keyring_module.o 00:02:07.299 CXX test/cpp_headers/keyring.o 00:02:07.299 CXX test/cpp_headers/log.o 00:02:07.299 CXX test/cpp_headers/likely.o 00:02:07.299 CXX test/cpp_headers/lvol.o 00:02:07.299 CXX test/cpp_headers/memory.o 00:02:07.299 CXX test/cpp_headers/notify.o 00:02:07.299 CXX test/cpp_headers/nbd.o 00:02:07.299 CXX test/cpp_headers/mmio.o 00:02:07.299 CXX test/cpp_headers/nvme_intel.o 00:02:07.299 CXX test/cpp_headers/nvme.o 00:02:07.299 CXX test/cpp_headers/nvme_ocssd.o 00:02:07.299 CXX test/cpp_headers/nvme_spec.o 00:02:07.299 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:07.299 CXX test/cpp_headers/nvme_zns.o 00:02:07.299 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:07.299 CXX test/cpp_headers/nvmf_cmd.o 00:02:07.299 CXX test/cpp_headers/nvmf.o 00:02:07.299 CXX test/cpp_headers/nvmf_transport.o 00:02:07.299 CXX test/cpp_headers/nvmf_spec.o 00:02:07.299 CXX test/cpp_headers/opal.o 00:02:07.299 CXX test/cpp_headers/opal_spec.o 00:02:07.299 CXX test/cpp_headers/pci_ids.o 00:02:07.299 CXX test/cpp_headers/pipe.o 00:02:07.299 CXX test/cpp_headers/queue.o 00:02:07.299 CXX test/cpp_headers/reduce.o 00:02:07.299 CC test/app/histogram_perf/histogram_perf.o 00:02:07.299 CC app/spdk_tgt/spdk_tgt.o 00:02:07.299 CC test/thread/poller_perf/poller_perf.o 00:02:07.299 CC test/app/stub/stub.o 00:02:07.299 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:07.299 CC 
test/env/memory/memory_ut.o 00:02:07.299 CC test/app/jsoncat/jsoncat.o 00:02:07.299 CXX test/cpp_headers/rpc.o 00:02:07.299 CC test/app/bdev_svc/bdev_svc.o 00:02:07.299 CC test/env/pci/pci_ut.o 00:02:07.299 CC test/dma/test_dma/test_dma.o 00:02:07.299 CC test/env/vtophys/vtophys.o 00:02:07.299 CC examples/ioat/perf/perf.o 00:02:07.299 CC app/fio/nvme/fio_plugin.o 00:02:07.299 CC examples/util/zipf/zipf.o 00:02:07.299 CC examples/ioat/verify/verify.o 00:02:07.562 CC app/fio/bdev/fio_plugin.o 00:02:07.562 LINK spdk_lspci 00:02:07.562 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:07.562 LINK rpc_client_test 00:02:07.562 LINK spdk_nvme_discover 00:02:07.562 LINK interrupt_tgt 00:02:07.822 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:07.822 CC test/env/mem_callbacks/mem_callbacks.o 00:02:07.822 LINK jsoncat 00:02:07.822 CXX test/cpp_headers/scheduler.o 00:02:07.822 LINK iscsi_tgt 00:02:07.823 CXX test/cpp_headers/scsi.o 00:02:07.823 CXX test/cpp_headers/scsi_spec.o 00:02:07.823 CXX test/cpp_headers/sock.o 00:02:07.823 CXX test/cpp_headers/stdinc.o 00:02:07.823 CXX test/cpp_headers/string.o 00:02:07.823 CXX test/cpp_headers/thread.o 00:02:07.823 CXX test/cpp_headers/trace.o 00:02:07.823 CXX test/cpp_headers/tree.o 00:02:07.823 CXX test/cpp_headers/ublk.o 00:02:07.823 CXX test/cpp_headers/util.o 00:02:07.823 CXX test/cpp_headers/trace_parser.o 00:02:07.823 LINK vtophys 00:02:07.823 CXX test/cpp_headers/uuid.o 00:02:07.823 CXX test/cpp_headers/version.o 00:02:07.823 CXX test/cpp_headers/vfio_user_pci.o 00:02:07.823 LINK env_dpdk_post_init 00:02:07.823 LINK poller_perf 00:02:07.823 LINK histogram_perf 00:02:07.823 CXX test/cpp_headers/vfio_user_spec.o 00:02:07.823 CXX test/cpp_headers/vhost.o 00:02:07.823 CXX test/cpp_headers/vmd.o 00:02:07.823 CXX test/cpp_headers/xor.o 00:02:07.823 CXX test/cpp_headers/zipf.o 00:02:07.823 LINK zipf 00:02:07.823 LINK nvmf_tgt 00:02:07.823 LINK spdk_trace_record 00:02:07.823 LINK spdk_tgt 00:02:07.823 LINK stub 00:02:07.823 LINK bdev_svc 
00:02:07.823 LINK spdk_dd 00:02:08.081 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:08.081 LINK ioat_perf 00:02:08.081 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:08.081 LINK verify 00:02:08.081 LINK test_dma 00:02:08.081 LINK spdk_trace 00:02:08.081 LINK pci_ut 00:02:08.081 LINK spdk_nvme 00:02:08.338 LINK nvme_fuzz 00:02:08.338 CC test/event/reactor/reactor.o 00:02:08.338 CC test/event/event_perf/event_perf.o 00:02:08.338 CC test/event/reactor_perf/reactor_perf.o 00:02:08.338 CC test/event/app_repeat/app_repeat.o 00:02:08.338 CC test/event/scheduler/scheduler.o 00:02:08.338 CC examples/sock/hello_world/hello_sock.o 00:02:08.338 LINK vhost_fuzz 00:02:08.338 LINK spdk_bdev 00:02:08.338 CC examples/idxd/perf/perf.o 00:02:08.338 CC examples/vmd/led/led.o 00:02:08.338 CC examples/vmd/lsvmd/lsvmd.o 00:02:08.338 CC examples/thread/thread/thread_ex.o 00:02:08.338 LINK mem_callbacks 00:02:08.338 LINK spdk_top 00:02:08.338 LINK spdk_nvme_identify 00:02:08.338 LINK event_perf 00:02:08.338 LINK reactor_perf 00:02:08.338 LINK spdk_nvme_perf 00:02:08.338 LINK reactor 00:02:08.338 CC app/vhost/vhost.o 00:02:08.596 LINK app_repeat 00:02:08.596 CC test/nvme/simple_copy/simple_copy.o 00:02:08.596 CC test/nvme/err_injection/err_injection.o 00:02:08.596 CC test/nvme/reset/reset.o 00:02:08.596 CC test/nvme/compliance/nvme_compliance.o 00:02:08.596 LINK lsvmd 00:02:08.596 LINK led 00:02:08.596 CC test/nvme/e2edp/nvme_dp.o 00:02:08.596 CC test/nvme/aer/aer.o 00:02:08.596 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:08.596 CC test/nvme/cuse/cuse.o 00:02:08.596 CC test/nvme/sgl/sgl.o 00:02:08.596 CC test/nvme/fused_ordering/fused_ordering.o 00:02:08.596 CC test/nvme/reserve/reserve.o 00:02:08.596 CC test/nvme/overhead/overhead.o 00:02:08.596 CC test/nvme/connect_stress/connect_stress.o 00:02:08.596 CC test/nvme/startup/startup.o 00:02:08.596 CC test/blobfs/mkfs/mkfs.o 00:02:08.596 CC test/nvme/boot_partition/boot_partition.o 00:02:08.596 LINK scheduler 00:02:08.596 CC 
test/nvme/fdp/fdp.o 00:02:08.596 CC test/accel/dif/dif.o 00:02:08.596 LINK hello_sock 00:02:08.596 LINK memory_ut 00:02:08.596 CC test/lvol/esnap/esnap.o 00:02:08.596 LINK vhost 00:02:08.596 LINK idxd_perf 00:02:08.596 LINK thread 00:02:08.854 LINK err_injection 00:02:08.854 LINK startup 00:02:08.854 LINK simple_copy 00:02:08.854 LINK boot_partition 00:02:08.854 LINK connect_stress 00:02:08.854 LINK reserve 00:02:08.854 LINK fused_ordering 00:02:08.854 LINK doorbell_aers 00:02:08.854 LINK mkfs 00:02:08.854 LINK sgl 00:02:08.854 LINK reset 00:02:08.854 LINK nvme_dp 00:02:08.854 LINK aer 00:02:08.854 LINK overhead 00:02:08.854 LINK nvme_compliance 00:02:08.854 LINK fdp 00:02:08.854 LINK dif 00:02:09.113 CC examples/nvme/hotplug/hotplug.o 00:02:09.113 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:09.113 CC examples/nvme/arbitration/arbitration.o 00:02:09.113 CC examples/nvme/hello_world/hello_world.o 00:02:09.113 CC examples/nvme/reconnect/reconnect.o 00:02:09.113 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:09.113 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:09.113 CC examples/nvme/abort/abort.o 00:02:09.113 CC examples/accel/perf/accel_perf.o 00:02:09.113 CC examples/blob/hello_world/hello_blob.o 00:02:09.113 CC examples/blob/cli/blobcli.o 00:02:09.113 LINK iscsi_fuzz 00:02:09.113 LINK hello_world 00:02:09.113 LINK pmr_persistence 00:02:09.113 LINK cmb_copy 00:02:09.113 LINK hotplug 00:02:09.370 LINK arbitration 00:02:09.370 LINK reconnect 00:02:09.370 LINK abort 00:02:09.370 LINK hello_blob 00:02:09.370 LINK nvme_manage 00:02:09.370 CC test/bdev/bdevio/bdevio.o 00:02:09.627 LINK accel_perf 00:02:09.627 LINK cuse 00:02:09.627 LINK blobcli 00:02:09.884 LINK bdevio 00:02:09.884 CC examples/bdev/bdevperf/bdevperf.o 00:02:09.884 CC examples/bdev/hello_world/hello_bdev.o 00:02:10.141 LINK hello_bdev 00:02:10.399 LINK bdevperf 00:02:10.964 CC examples/nvmf/nvmf/nvmf.o 00:02:11.221 LINK nvmf 00:02:12.153 LINK esnap 00:02:12.411 00:02:12.411 real 0m43.489s 
00:02:12.411 user 6m29.556s 00:02:12.411 sys 3m23.632s 00:02:12.411 17:10:30 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:12.411 17:10:30 make -- common/autotest_common.sh@10 -- $ set +x 00:02:12.411 ************************************ 00:02:12.411 END TEST make 00:02:12.411 ************************************ 00:02:12.411 17:10:31 -- common/autotest_common.sh@1142 -- $ return 0 00:02:12.411 17:10:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:12.411 17:10:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:12.411 17:10:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:12.411 17:10:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.411 17:10:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:12.411 17:10:31 -- pm/common@44 -- $ pid=3777759 00:02:12.411 17:10:31 -- pm/common@50 -- $ kill -TERM 3777759 00:02:12.411 17:10:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.411 17:10:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:12.411 17:10:31 -- pm/common@44 -- $ pid=3777760 00:02:12.411 17:10:31 -- pm/common@50 -- $ kill -TERM 3777760 00:02:12.411 17:10:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.411 17:10:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:12.411 17:10:31 -- pm/common@44 -- $ pid=3777762 00:02:12.411 17:10:31 -- pm/common@50 -- $ kill -TERM 3777762 00:02:12.411 17:10:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.411 17:10:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:12.411 17:10:31 -- pm/common@44 -- $ pid=3777785 00:02:12.411 17:10:31 -- pm/common@50 -- $ sudo -E kill -TERM 3777785 
00:02:12.411 17:10:31 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:12.411 17:10:31 -- nvmf/common.sh@7 -- # uname -s 00:02:12.411 17:10:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:12.411 17:10:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:12.411 17:10:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:12.411 17:10:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:12.411 17:10:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:12.411 17:10:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:12.411 17:10:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:12.411 17:10:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:12.411 17:10:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:12.411 17:10:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:12.411 17:10:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:12.411 17:10:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:12.411 17:10:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:12.411 17:10:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:12.411 17:10:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:12.411 17:10:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:12.411 17:10:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:12.411 17:10:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:12.411 17:10:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:12.411 17:10:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:12.411 17:10:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.411 17:10:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.412 17:10:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.412 17:10:31 -- paths/export.sh@5 -- # export PATH 00:02:12.412 17:10:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.412 17:10:31 -- nvmf/common.sh@47 -- # : 0 00:02:12.412 17:10:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:12.412 17:10:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:12.412 17:10:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:12.412 17:10:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:12.412 17:10:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:12.412 17:10:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:12.412 17:10:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:12.412 17:10:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:12.412 17:10:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:12.412 17:10:31 -- spdk/autotest.sh@32 -- # 
uname -s 00:02:12.412 17:10:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:12.412 17:10:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:12.412 17:10:31 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:12.412 17:10:31 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:12.412 17:10:31 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:12.412 17:10:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:12.412 17:10:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:12.412 17:10:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:12.412 17:10:31 -- spdk/autotest.sh@48 -- # udevadm_pid=3836533 00:02:12.412 17:10:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:12.412 17:10:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:12.412 17:10:31 -- pm/common@17 -- # local monitor 00:02:12.412 17:10:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.412 17:10:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.412 17:10:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.412 17:10:31 -- pm/common@21 -- # date +%s 00:02:12.412 17:10:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.412 17:10:31 -- pm/common@21 -- # date +%s 00:02:12.412 17:10:31 -- pm/common@25 -- # sleep 1 00:02:12.412 17:10:31 -- pm/common@21 -- # date +%s 00:02:12.412 17:10:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720797031 00:02:12.412 17:10:31 -- pm/common@21 -- # date +%s 00:02:12.669 17:10:31 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720797031 00:02:12.669 17:10:31 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720797031 00:02:12.669 17:10:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720797031 00:02:12.669 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720797031_collect-cpu-load.pm.log 00:02:12.669 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720797031_collect-vmstat.pm.log 00:02:12.669 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720797031_collect-cpu-temp.pm.log 00:02:12.669 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720797031_collect-bmc-pm.bmc.pm.log 00:02:13.603 17:10:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:13.603 17:10:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:13.603 17:10:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:13.603 17:10:32 -- common/autotest_common.sh@10 -- # set +x 00:02:13.603 17:10:32 -- spdk/autotest.sh@59 -- # create_test_list 00:02:13.603 17:10:32 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:13.603 17:10:32 -- common/autotest_common.sh@10 -- # set +x 00:02:13.603 17:10:32 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:13.603 17:10:32 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.603 17:10:32 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.603 17:10:32 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:13.603 17:10:32 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.603 17:10:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:13.603 17:10:32 -- common/autotest_common.sh@1455 -- # uname 00:02:13.603 17:10:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:13.603 17:10:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:13.603 17:10:32 -- common/autotest_common.sh@1475 -- # uname 00:02:13.603 17:10:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:13.603 17:10:32 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:13.603 17:10:32 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:13.603 17:10:32 -- spdk/autotest.sh@72 -- # hash lcov 00:02:13.603 17:10:32 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:13.603 17:10:32 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:13.603 --rc lcov_branch_coverage=1 00:02:13.603 --rc lcov_function_coverage=1 00:02:13.603 --rc genhtml_branch_coverage=1 00:02:13.603 --rc genhtml_function_coverage=1 00:02:13.603 --rc genhtml_legend=1 00:02:13.603 --rc geninfo_all_blocks=1 00:02:13.603 ' 00:02:13.603 17:10:32 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:13.603 --rc lcov_branch_coverage=1 00:02:13.603 --rc lcov_function_coverage=1 00:02:13.603 --rc genhtml_branch_coverage=1 00:02:13.603 --rc genhtml_function_coverage=1 00:02:13.603 --rc genhtml_legend=1 00:02:13.603 --rc geninfo_all_blocks=1 00:02:13.603 ' 00:02:13.603 17:10:32 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:13.603 --rc lcov_branch_coverage=1 00:02:13.603 --rc lcov_function_coverage=1 00:02:13.603 --rc genhtml_branch_coverage=1 00:02:13.603 --rc 
genhtml_function_coverage=1 00:02:13.603 --rc genhtml_legend=1 00:02:13.603 --rc geninfo_all_blocks=1 00:02:13.603 --no-external' 00:02:13.603 17:10:32 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:13.603 --rc lcov_branch_coverage=1 00:02:13.603 --rc lcov_function_coverage=1 00:02:13.603 --rc genhtml_branch_coverage=1 00:02:13.603 --rc genhtml_function_coverage=1 00:02:13.603 --rc genhtml_legend=1 00:02:13.603 --rc geninfo_all_blocks=1 00:02:13.603 --no-external' 00:02:13.603 17:10:32 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:13.603 lcov: LCOV version 1.14 00:02:13.603 17:10:32 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:25.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:25.795 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no 
functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:33.915 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:33.915 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno
00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found
00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno
00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno
00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno
00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:02:33.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:02:33.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:02:33.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:02:33.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
00:02:33.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:02:33.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:02:33.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:02:33.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:02:33.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:02:33.916 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno
00:02:34.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found
00:02:34.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno
00:02:34.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found
00:02:34.217 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno
00:02:34.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found
00:02:34.217 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno
00:02:34.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found
00:02:34.217 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno
00:02:34.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found
00:02:34.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno
00:02:34.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found
00:02:34.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno
00:02:34.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found
00:02:34.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno
00:02:34.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found
00:02:34.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno
00:02:34.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found
00:02:34.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno
00:02:34.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found
00:02:34.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno
00:02:34.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:02:34.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno
00:02:34.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found
00:02:34.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno
00:02:34.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found
00:02:34.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno
00:02:34.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found
00:02:34.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno
00:02:34.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found
00:02:34.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno
00:02:37.804 17:10:56 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:37.804 17:10:56 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:37.804 17:10:56 -- common/autotest_common.sh@10 -- # set +x
00:02:37.804 17:10:56 -- spdk/autotest.sh@91 -- # rm -f
00:02:37.804 17:10:56 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:40.331 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:02:40.331 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:02:40.331 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:02:40.331 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:02:40.331 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:40.331 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:02:40.331 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:02:40.331 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:02:40.589 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:02:40.589 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:02:40.589 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:02:40.589 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:02:40.589 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:02:40.589 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:02:40.589 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:02:40.589 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:02:40.589 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:02:40.589 17:10:59 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:40.589 17:10:59 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:40.589 17:10:59 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:40.589 17:10:59 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:40.589 17:10:59 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:40.589 17:10:59 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:40.589 17:10:59 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:40.589 17:10:59 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:40.589 17:10:59 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:40.589 17:10:59 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:02:40.589 17:10:59 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:40.589 17:10:59 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:40.589 17:10:59 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:40.589 17:10:59 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:40.589 17:10:59 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:40.847 No valid GPT data, bailing
00:02:40.847 17:10:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:40.847 17:10:59 -- scripts/common.sh@391 -- # pt=
00:02:40.847 17:10:59 -- scripts/common.sh@392 -- # return 1
00:02:40.847 17:10:59 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:40.847 1+0 records in
00:02:40.847 1+0 records out
00:02:40.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00139385 s, 752 MB/s
00:02:40.847 17:10:59 -- spdk/autotest.sh@118 -- # sync
00:02:40.847 17:10:59 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:40.847 17:10:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:40.847 17:10:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:02:46.114 17:11:04 -- spdk/autotest.sh@124 -- # uname -s
00:02:46.114 17:11:04 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:02:46.114 17:11:04 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:46.114 17:11:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:46.114 17:11:04 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:46.114 17:11:04 -- common/autotest_common.sh@10 -- # set +x
00:02:46.114 ************************************
00:02:46.114 START TEST setup.sh
00:02:46.114 ************************************
00:02:46.114 17:11:04 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:46.114 * Looking for test storage...
00:02:46.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:46.114 17:11:04 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:02:46.114 17:11:04 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:02:46.114 17:11:04 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:46.114 17:11:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:46.114 17:11:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:46.114 17:11:04 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:46.114 ************************************
00:02:46.114 START TEST acl
00:02:46.114 ************************************
00:02:46.114 17:11:04 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:46.114 * Looking for test storage...
00:02:46.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:46.114 17:11:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:46.114 17:11:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:46.114 17:11:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:46.114 17:11:04 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:46.114 17:11:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:46.114 17:11:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:46.114 17:11:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:46.114 17:11:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:46.114 17:11:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:46.114 17:11:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:46.114 17:11:04 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:46.114 17:11:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:46.114 17:11:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:46.114 17:11:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:46.114 17:11:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:46.114 17:11:04 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:49.401 17:11:07 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:49.401 17:11:07 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:49.401 17:11:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.401 17:11:07 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:49.401 17:11:07 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.401 17:11:07 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:51.931 Hugepages 00:02:51.931 node hugesize free / total 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 00:02:51.931 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- 
setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:80:04.5 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:51.931 17:11:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.932 17:11:10 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:51.932 17:11:10 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:51.932 17:11:10 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:51.932 17:11:10 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:51.932 17:11:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:51.932 ************************************ 00:02:51.932 START TEST denied 00:02:51.932 ************************************ 00:02:51.932 17:11:10 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:51.932 17:11:10 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:51.932 17:11:10 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:51.932 17:11:10 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:51.932 17:11:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.932 17:11:10 setup.sh.acl.denied -- setup/common.sh@10 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:54.461 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:54.461 17:11:13 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:54.461 17:11:13 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:54.461 17:11:13 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:54.461 17:11:13 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:54.461 17:11:13 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:54.461 17:11:13 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:54.461 17:11:13 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:54.461 17:11:13 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:54.461 17:11:13 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:54.461 17:11:13 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.644 00:02:58.644 real 0m6.590s 00:02:58.644 user 0m2.066s 00:02:58.644 sys 0m3.792s 00:02:58.644 17:11:16 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:58.644 17:11:16 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:58.644 ************************************ 00:02:58.644 END TEST denied 00:02:58.644 ************************************ 00:02:58.644 17:11:17 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:58.644 17:11:17 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:58.644 17:11:17 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:58.644 17:11:17 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:58.644 17:11:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:58.644 
************************************ 00:02:58.644 START TEST allowed 00:02:58.644 ************************************ 00:02:58.644 17:11:17 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:58.644 17:11:17 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:58.644 17:11:17 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:58.644 17:11:17 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:58.644 17:11:17 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.644 17:11:17 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:01.961 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:01.961 17:11:20 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:01.961 17:11:20 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:01.961 17:11:20 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:01.961 17:11:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.961 17:11:20 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.239 00:03:05.239 real 0m6.683s 00:03:05.239 user 0m2.029s 00:03:05.239 sys 0m3.748s 00:03:05.239 17:11:23 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.239 17:11:23 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:05.239 ************************************ 00:03:05.239 END TEST allowed 00:03:05.239 ************************************ 00:03:05.239 17:11:23 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:05.239 00:03:05.239 real 0m19.066s 00:03:05.239 user 0m6.229s 00:03:05.239 sys 0m11.307s 00:03:05.239 17:11:23 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.239 17:11:23 setup.sh.acl -- common/autotest_common.sh@10 -- 
# set +x 00:03:05.239 ************************************ 00:03:05.239 END TEST acl 00:03:05.239 ************************************ 00:03:05.239 17:11:23 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:05.239 17:11:23 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:05.239 17:11:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:05.239 17:11:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.239 17:11:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:05.239 ************************************ 00:03:05.239 START TEST hugepages 00:03:05.239 ************************************ 00:03:05.239 17:11:23 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:05.239 * Looking for test storage... 00:03:05.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:05.239 17:11:23 setup.sh.hugepages -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 173545852 kB' 'MemAvailable: 176398724 kB' 'Buffers: 3896 kB' 'Cached: 10032848 kB' 'SwapCached: 0 kB' 'Active: 7021956 kB' 'Inactive: 3493732 kB' 'Active(anon): 6630256 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482220 kB' 'Mapped: 180072 kB' 'Shmem: 6151312 kB' 'KReclaimable: 223312 kB' 'Slab: 776180 kB' 'SReclaimable: 223312 kB' 'SUnreclaim: 552868 kB' 'KernelStack: 20320 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 8126904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314812 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:05.239 17:11:23 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:05.239 17:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [... identical xtrace iterations elided: the same compare/continue repeats for each intermediate /proc/meminfo field ...] 00:03:05.241 17:11:23 setup.sh.hugepages
-- setup/common.sh@31 -- # read -r var val _ 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:05.241 
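The xtrace above is a bash loop that reads /proc/meminfo-style lines with `IFS=': '`, skipping each field with `continue` until the requested key (here Hugepagesize) matches, then echoing its value (2048). A self-contained sketch of that parsing pattern, using inlined sample data instead of the live /proc/meminfo:

```shell
# Sketch of the get_meminfo pattern seen in the log: scan "Key: value kB"
# lines and print the value for one key. Sample input is inlined so the
# example is deterministic and needs no real /proc.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [ "$var" = "$get" ]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

sample='MemTotal: 191381156 kB
HugePages_Total: 2048
Hugepagesize: 2048 kB'

printf '%s\n' "$sample" | get_meminfo Hugepagesize   # prints 2048
```

The real helper additionally supports a per-node mode by pointing the input at /sys/devices/system/node/node$N/meminfo and stripping the `Node N` prefix, as the `mem=("${mem[@]#Node +([0-9]) }")` line above shows.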
17:11:23 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.241 17:11:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:05.241 17:11:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:05.241 17:11:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.241 17:11:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:05.241 17:11:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.241 17:11:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:05.241 17:11:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:05.241 17:11:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:05.241 17:11:24 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:05.241 17:11:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:05.241 17:11:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.241 17:11:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:05.499 
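The get_nodes and clear_hp steps above index a bash array by NUMA node number, recovered from the sysfs path with `${node##*node}`, and then write 0 to every per-node `nr_hugepages` file. A sketch of both steps against a temporary directory standing in for /sys (the tree layout and counts are illustrative):

```shell
# Fake sysfs tree so the example runs unprivileged.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/node0/hugepages/hugepages-2048kB" \
         "$sysfs/node1/hugepages/hugepages-2048kB"
echo 2048 > "$sysfs/node0/hugepages/hugepages-2048kB/nr_hugepages"
echo 0    > "$sysfs/node1/hugepages/hugepages-2048kB/nr_hugepages"

# get_nodes: index nodes_sys[] by the node number stripped from the path
# ("…/node0" -> "0"), storing each node's current hugepage count.
declare -a nodes_sys
for node in "$sysfs"/node*; do
    nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}

# clear_hp: zero every per-node, per-size hugepage pool.
for node in "${!nodes_sys[@]}"; do
    for hp in "$sysfs/node$node/hugepages/hugepages-"*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
echo "no_nodes=$no_nodes initial=(${nodes_sys[*]})"
```

`${node##*node}` strips the longest prefix ending in "node", so it yields the node index regardless of where the tree is rooted; that is why the log's loop can write `nodes_sys[${node##*node}]=…` directly.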
************************************ 00:03:05.499 START TEST default_setup 00:03:05.499 ************************************ 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes 
in "${user_nodes[@]}" 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.499 17:11:24 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:08.022 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:08.022 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:08.602 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:08.868 
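Node-ID bookkeeping aside, the core arithmetic of get_test_nr_hugepages above is a division of the requested size by the hugepage size, which is how size=2097152 kB becomes nr_hugepages=1024 with 2048 kB pages. In shell:

```shell
# Core of the size -> page-count conversion seen in the log.
size_kb=2097152        # requested reservation (2 GiB, in kB)
hugepagesize_kb=2048   # from /proc/meminfo Hugepagesize
nr_hugepages=$((size_kb / hugepagesize_kb))
echo "nr_hugepages=$nr_hugepages"   # 1024
```

The result is then distributed across the user-supplied node list (here the single node 0), giving the `nodes_test[_no_nodes]=1024` assignment above.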
17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175703640 kB' 'MemAvailable: 178556512 kB' 'Buffers: 3896 kB' 'Cached: 10032956 kB' 'SwapCached: 0 kB' 'Active: 7033748 kB' 'Inactive: 3493732 kB' 'Active(anon): 6642048 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 
'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493516 kB' 'Mapped: 179572 kB' 'Shmem: 6151420 kB' 'KReclaimable: 223312 kB' 'Slab: 775264 kB' 'SReclaimable: 223312 kB' 'SUnreclaim: 551952 kB' 'KernelStack: 20464 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8137912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314808 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.868 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.868 [... identical xtrace iterations elided: the same compare/continue repeats for each remaining /proc/meminfo field ...] 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31
-- # read -r var val _ 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.868 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo 
HugePages_Surp 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175706724 kB' 'MemAvailable: 178559596 kB' 'Buffers: 3896 kB' 'Cached: 10032960 kB' 'SwapCached: 0 kB' 'Active: 7034412 kB' 'Inactive: 3493732 kB' 'Active(anon): 6642712 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494240 kB' 'Mapped: 179548 kB' 'Shmem: 6151424 kB' 'KReclaimable: 223312 kB' 'Slab: 775288 kB' 'SReclaimable: 223312 kB' 'SUnreclaim: 551976 kB' 'KernelStack: 20448 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 
8140548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314776 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.869 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 
17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.870 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:08.871 
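The trace above shows `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` one `Key: value` line at a time (`IFS=': '` + `read -r var val _`), hitting `continue` for every non-matching key until the requested field (`HugePages_Surp`) matches, then echoing its value and returning. A minimal sketch of that technique — this is a simplified stand-in, not the actual SPDK helper, and `get_meminfo_field` is a hypothetical name:

```shell
#!/usr/bin/env bash
# Sketch of the per-field scan seen in the trace: read a meminfo-style
# stream line by line, splitting each line into key and value, and echo
# the value once the requested key matches.
get_meminfo_field() {
    local want=$1 var val _
    # IFS=': ' splits "HugePages_Total: 1024" into var=HugePages_Total, val=1024
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
        # non-matching keys fall through, mirroring the "continue" lines above
    done
    echo 0   # field absent: report 0, as the trace does for a failed scan
}

# Usage with a small fake /proc/meminfo snippet (real runs read the live file):
sample='MemTotal: 191381156 kB
HugePages_Total: 1024
HugePages_Rsvd: 0'
printf '%s\n' "$sample" | get_meminfo_field HugePages_Total   # prints 1024
```

The real helper additionally captures the whole file with `mapfile -t mem` and strips per-node `Node N ` prefixes (visible later in the trace as `mem=("${mem[@]#Node +([0-9]) }")`) so the same scan works for both global and per-NUMA-node meminfo files.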
17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175708636 kB' 'MemAvailable: 178561508 kB' 'Buffers: 3896 kB' 'Cached: 10032976 kB' 'SwapCached: 0 kB' 'Active: 7033724 kB' 'Inactive: 3493732 kB' 'Active(anon): 6642024 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493884 kB' 'Mapped: 179492 kB' 'Shmem: 6151440 kB' 'KReclaimable: 223312 kB' 'Slab: 775260 kB' 'SReclaimable: 223312 kB' 'SUnreclaim: 551948 
kB' 'KernelStack: 20416 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8139312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314808 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.871 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.872 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.873 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:08.874 nr_hugepages=1024 00:03:08.874 17:11:27 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:08.874 resv_hugepages=0 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:08.874 surplus_hugepages=0 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:08.874 anon_hugepages=0 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 
'MemFree: 175708888 kB' 'MemAvailable: 178561760 kB' 'Buffers: 3896 kB' 'Cached: 10033000 kB' 'SwapCached: 0 kB' 'Active: 7034092 kB' 'Inactive: 3493732 kB' 'Active(anon): 6642392 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494232 kB' 'Mapped: 179492 kB' 'Shmem: 6151464 kB' 'KReclaimable: 223312 kB' 'Slab: 775260 kB' 'SReclaimable: 223312 kB' 'SUnreclaim: 551948 kB' 'KernelStack: 20480 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8140592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314840 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB'
00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:08.874 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the same IFS=': ' / read -r var val _ / field test / continue cycle repeats for every remaining /proc/meminfo field listed above ...]
00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in
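The scan traced above is setup/common.sh's get_meminfo pattern: split each meminfo record on `IFS=': '` into a field name and a value, skip with `continue` until the requested key (here HugePages_Total) matches, then echo its value. A minimal reproducible sketch of that pattern; the function name `get_meminfo_value` and the canned `sample` input are illustrative, not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Sketch of the field scan traced above: IFS=': ' splits "Key: value unit"
# records into (var, val, _); the loop skips until var matches the wanted key.
get_meminfo_value() {   # illustrative name; the traced SPDK helper is get_meminfo
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { printf '%s\n' "$val"; return 0; }
    done
    return 1            # requested field not present
}

# A canned sample instead of the live /proc/meminfo, so the sketch is reproducible.
sample=$'MemTotal: 97615628 kB\nHugePages_Total: 1024\nHugePages_Surp: 0'
get_meminfo_value HugePages_Total <<<"$sample"   # prints 1024
```

On a Linux host the same function can be fed `< /proc/meminfo` directly, which is what the traced script effectively does for the system-wide pass.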
/sys/devices/system/node/node+([0-9]) 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:03:08.875 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92202348 kB' 'MemUsed: 5413280 kB' 'SwapCached: 0 kB' 'Active: 1642492 kB' 'Inactive: 236388 kB' 'Active(anon): 1459692 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 'Inactive(file): 236388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1662612 kB' 'Mapped: 84476 kB' 'AnonPages: 219432 kB' 'Shmem: 1243424 kB' 'KernelStack: 11992 kB' 'PageTables: 4676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84772 kB' 'Slab: 340548 kB' 'SReclaimable: 84772 kB' 'SUnreclaim: 255776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.876 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:08.877 node0=1024 expecting 1024 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:08.877 00:03:08.877 real 0m3.572s 00:03:08.877 user 0m1.082s 00:03:08.877 sys 0m1.723s 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:08.877 17:11:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:08.877 ************************************ 00:03:08.877 END TEST default_setup 00:03:08.877 ************************************ 00:03:08.877 17:11:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:08.877 17:11:27 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:08.877 17:11:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:08.877 17:11:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.877 17:11:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:09.136 
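The long trace above is the `get_meminfo` helper from `setup/common.sh` walking a `/proc/meminfo` snapshot field by field (`IFS=': '` plus `read -r var val _`, with `continue` on every non-matching key) until it reaches the requested key, then echoing its value. A minimal standalone sketch of that parsing pattern — the function name mirrors the script's helper, but this simplified body and the sample input are illustrative, not SPDK's exact implementation:

```shell
#!/usr/bin/env bash
# Sketch of the loop traced above: split each meminfo-style line on
# ': ' and print the value of the first line whose key matches.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # first value field; any trailing "kB" lands in $_
            return 0
        fi
    done
    return 1
}

# Example against a meminfo-style snippet (values copied from the log):
get_meminfo HugePages_Total <<'EOF'
MemTotal: 191381156 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0
EOF
```

Run against the snippet, this prints `1024`. The real helper additionally switches `mem_f` to `/sys/devices/system/node/node$node/meminfo` when a node is given, which is why the per-node checks below probe that path.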
************************************ 00:03:09.136 START TEST per_node_1G_alloc 00:03:09.136 ************************************ 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.136 
17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.136 17:11:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:11.677 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:11.677 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:80:04.6 (8086 
2021): Already using the vfio-pci driver 00:03:11.677 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:11.677 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:11.677 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:11.677 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:11.677 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:11.677 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.678 17:11:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175744448 kB' 'MemAvailable: 178597320 kB' 'Buffers: 3896 kB' 'Cached: 10033088 kB' 'SwapCached: 0 kB' 'Active: 7036424 kB' 'Inactive: 3493732 kB' 'Active(anon): 6644724 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496580 kB' 'Mapped: 179508 kB' 'Shmem: 6151552 kB' 'KReclaimable: 223312 kB' 'Slab: 774748 kB' 'SReclaimable: 223312 kB' 'SUnreclaim: 551436 kB' 'KernelStack: 20448 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8138428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.678 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 
17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:11.679 17:11:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.679 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175743740 kB' 'MemAvailable: 178596612 kB' 'Buffers: 3896 kB' 'Cached: 10033092 kB' 'SwapCached: 0 kB' 'Active: 7035676 kB' 'Inactive: 3493732 kB' 'Active(anon): 6643976 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496300 kB' 'Mapped: 179484 kB' 'Shmem: 6151556 kB' 'KReclaimable: 223312 kB' 'Slab: 774808 kB' 'SReclaimable: 223312 kB' 'SUnreclaim: 551496 kB' 'KernelStack: 20448 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8138448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB'
00:03:11.679-00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repeated trace omitted: for every /proc/meminfo key from MemTotal through HugePages_Rsvd, the loop runs `IFS=': '`, `read -r var val _`, tests `[[ $key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]`, and issues `continue` on each non-matching key]
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.681 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175744312 kB' 'MemAvailable: 178597184 kB' 'Buffers: 3896 kB' 'Cached: 10033108 kB' 'SwapCached: 0 kB' 'Active: 7036020 kB' 'Inactive: 3493732 kB' 'Active(anon): 6644320 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496676 kB' 'Mapped: 179484 kB' 'Shmem: 6151572 kB' 'KReclaimable: 223312 kB' 'Slab: 774800 kB' 'SReclaimable: 223312 kB' 'SUnreclaim: 551488 kB' 'KernelStack: 20496 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8138104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB'
00:03:11.681-00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repeated trace omitted: the loop again walks the keys from MemTotal through CommitLimit, testing each against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and issuing `continue` on each non-match]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:11.683 nr_hugepages=1024 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:11.683 resv_hugepages=0 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:11.683 surplus_hugepages=0 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:11.683 anon_hugepages=0 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # 
(( 1024 == nr_hugepages + surp + resv )) 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175743348 kB' 'MemAvailable: 178596220 kB' 'Buffers: 3896 kB' 'Cached: 10033132 kB' 'SwapCached: 0 kB' 'Active: 7035928 kB' 'Inactive: 3493732 kB' 'Active(anon): 6644228 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496428 
kB' 'Mapped: 179484 kB' 'Shmem: 6151596 kB' 'KReclaimable: 223312 kB' 'Slab: 774816 kB' 'SReclaimable: 223312 kB' 'SUnreclaim: 551504 kB' 'KernelStack: 20464 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8138492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314840 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.683 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:11.684 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical "IFS=': '" / "read -r var val _" / "[[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" iterations for VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted elided ...]
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024
== nr_hugepages + surp + resv ))
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93261612 kB' 'MemUsed: 4354016 kB' 'SwapCached: 0 kB' 'Active: 1643456 kB' 'Inactive: 236388 kB' 'Active(anon): 1460656 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 'Inactive(file): 236388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1662752 kB' 'Mapped: 84452 kB' 'AnonPages: 220484 kB' 'Shmem: 1243564 kB' 'KernelStack: 11992 kB' 'PageTables: 4636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84772 kB' 'Slab: 340464 kB' 'SReclaimable: 84772 kB' 'SUnreclaim: 255692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.685 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical "IFS=': '" / "read -r var val _" / "[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" iterations for the remaining non-matching node0 meminfo keys (MemUsed through HugePages_Free) elided ...]
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:11.686 17:11:30
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.686 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.687 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.687 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765528 kB' 'MemFree: 82482228 kB' 'MemUsed: 11283300 kB' 'SwapCached: 0 kB' 'Active: 5393036 kB' 'Inactive: 3257344 kB' 'Active(anon): 5184136 kB' 'Inactive(anon): 0 kB' 'Active(file): 208900 kB' 'Inactive(file): 3257344 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8374300 kB' 'Mapped: 95032 kB' 'AnonPages: 276488 kB' 'Shmem: 4908056 kB' 'KernelStack: 8440 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138540 kB' 'Slab: 434344 kB' 'SReclaimable: 138540 kB' 'SUnreclaim: 295804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:11.687 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.687 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical "IFS=': '" / "read -r var val _" / "[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" iterations for the remaining non-matching node1 meminfo keys (MemFree through FilePmdMapped) elided ...]
00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:11.688 node0=512 expecting 512 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:11.688 node1=512 expecting 512 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:11.688 00:03:11.688 real 0m2.692s 00:03:11.688 user 0m1.067s 00:03:11.688 sys 0m1.677s 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.688 17:11:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:11.688 ************************************ 00:03:11.688 END TEST per_node_1G_alloc 00:03:11.688 ************************************ 00:03:11.688 17:11:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:11.688 17:11:30 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:11.688 17:11:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.688 17:11:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.688 17:11:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:11.688 
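The HugePages_Surp records traced above are setup/common.sh's get_meminfo walking /proc/meminfo one line at a time: split each line on `': '`, `continue` past every key that is not the one requested, and echo the matching value. A minimal standalone sketch of that loop follows; the heredoc input is a stand-in for /proc/meminfo (values taken from the snapshots in this log), not a live read.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop traced above: read key/value pairs with
# IFS=': ', skip every non-matching key, echo the value of the match.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated 'continue' records in the log
        echo "$val"
        return 0
    done
}

# Stand-in for /proc/meminfo.
get_meminfo HugePages_Surp <<'EOF'
MemTotal: 191381156 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0
EOF
# prints: 0
```

The `_` catch-all swallows the trailing "kB" unit, which is why the traced loop can echo a bare number for the arithmetic that follows.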
00:03:11.688 ************************************
00:03:11.688 START TEST even_2G_alloc
00:03:11.688 ************************************
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:11.688 17:11:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:14.217 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:14.217 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:80:04.5 (8086
2021): Already using the vfio-pci driver
00:03:14.217 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:14.217 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.503 17:11:33
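The get_test_nr_hugepages_per_node records above (hugepages.sh@62 through @84) distribute the 1024 requested pages evenly over the two NUMA nodes, 512 each, which is exactly what the "node0=512 expecting 512" / "node1=512 expecting 512" results assert. A hedged sketch of that even split; the variable names loosely mirror the trace, while the `per_node` helper and the explicit index `i` are illustrative additions, not the script's actual code.

```shell
#!/usr/bin/env bash
# Sketch of the even per-node split traced above: 1024 pages over 2 nodes.
nr_hugepages=1024
no_nodes=2
per_node=$(( nr_hugepages / no_nodes ))   # 512 per node

declare -a nodes_test
i=$no_nodes
while (( i > 0 )); do            # cf. the repeated hugepages.sh@81 records
    nodes_test[i - 1]=$per_node  # cf. nodes_test[_no_nodes - 1]=512 at @82
    (( i-- ))
done

for node in "${!nodes_test[@]}"; do   # cf. the hugepages.sh@126 loop
    echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
done
```

Filling the array from the last node down matches the `nodes_test[_no_nodes - 1]` indexing seen in the xtrace output.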
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175769740 kB' 'MemAvailable: 178622596 kB' 'Buffers: 3896 kB' 'Cached: 10033248 kB' 'SwapCached: 0 kB' 'Active: 7032016 kB' 'Inactive: 3493732 kB' 'Active(anon): 6640316 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491932 kB' 'Mapped: 178508 kB' 'Shmem: 6151712 kB' 'KReclaimable: 223280 kB' 'Slab: 774320 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551040 kB' 'KernelStack: 20400 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8131796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB'
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:14.503 17:11:33
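The hugepages.sh@96 record above, `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]`, decides whether AnonHugePages even needs to be fetched: it tests the contents of the transparent-hugepage control file against the literal `[never]`. A small sketch of that check; the sample string comes from the pattern visible in the trace, and `anon_needed` is an illustrative name, not a variable from the script.

```shell
#!/usr/bin/env bash
# Sketch of the hugepages.sh@96 check: AnonHugePages is only fetched when
# transparent hugepages are not globally disabled ('[never]' selected).
thp='always [madvise] never'   # sample contents of
                               # /sys/kernel/mm/transparent_hugepage/enabled
if [[ $thp != *"[never]"* ]]; then
    anon_needed=1   # THP active in some mode: go read AnonHugePages
else
    anon_needed=0   # THP off everywhere: anon hugepage count is trivially 0
fi
echo "anon_needed=$anon_needed"
# prints: anon_needed=1
```

The brackets in the sysfs file mark the selected mode, so matching the substring `[never]` is enough to detect the disabled state.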
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.503 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical "@32 [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / @32 continue / @31 IFS=': ' / @31 read -r var val _" xtrace records repeat for the /proc/meminfo keys MemFree through HardwareCorrupted ...]
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.505 17:11:33
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175772156 kB' 'MemAvailable: 178625012 kB' 'Buffers: 3896 kB' 'Cached: 10033252 kB' 'SwapCached: 0 kB' 'Active: 7030912 kB' 'Inactive: 3493732 kB' 'Active(anon): 6639212 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490740 kB' 'Mapped: 178480 kB' 'Shmem: 6151716 kB' 'KReclaimable: 223280 kB' 'Slab: 774324 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551044 kB' 'KernelStack: 20432 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8130320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314824 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.505 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.505 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 
17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.506 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.507 17:11:33 
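The trace above is `setup/common.sh`'s `get_meminfo` helper at work: it reads the meminfo snapshot line by line with `IFS=': '`, skips (`continue`) every field that does not match the requested key, and echoes the value once it does (here `HugePages_Surp` → `0`, captured as `surp=0`). A minimal standalone sketch of that pattern — simplified from the trace, without SPDK's per-NUMA-node file selection or the `Node <n>` prefix stripping, and with a hypothetical optional file argument added so it can be exercised against a fabricated snippet:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: scan a
# meminfo-style file and print the numeric value of one field.
# (SPDK's real helper also handles /sys/devices/system/node/node<n>/meminfo
# and strips the "Node <n> " prefix; omitted here for brevity.)
get_meminfo() {
    local get=$1
    local mem_f=${2:-/proc/meminfo}   # optional file arg is an addition for testing
    local var val _
    # IFS=': ' splits "MemTotal:   191381156 kB" into
    # var=MemTotal, val=191381156, _=kB
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1   # field not found
}

# Demo against a tiny fabricated meminfo snippet (values made up):
tmp=$(mktemp)
printf 'MemTotal: 191381156 kB\nHugePages_Surp: 0\n' > "$tmp"
get_meminfo HugePages_Surp "$tmp"   # prints: 0
rm -f "$tmp"
```

The trace's `== \H\u\g\e\P\a\g\e\s\_\S\u\r\p` comparisons are just how `set -x` renders the quoted pattern in `[[ $var == "$get" ]]`; each non-matching field triggers the `continue` / `IFS=': '` / `read -r var val _` triple seen repeated throughout the log.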
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175774656 kB' 'MemAvailable: 178627512 kB' 'Buffers: 3896 kB' 'Cached: 10033268 kB' 'SwapCached: 0 kB' 'Active: 7031108 kB' 'Inactive: 3493732 kB' 'Active(anon): 6639408 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491496 kB' 'Mapped: 178480 kB' 'Shmem: 6151732 kB' 'KReclaimable: 223280 kB' 'Slab: 774392 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551112 kB' 'KernelStack: 20528 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8131832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 
17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.507 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.508 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:14.509 nr_hugepages=1024 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:14.509 resv_hugepages=0 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:14.509 surplus_hugepages=0 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:14.509 anon_hugepages=0 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.509 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175776808 kB' 'MemAvailable: 178629664 kB' 'Buffers: 3896 kB' 'Cached: 10033292 kB' 'SwapCached: 0 kB' 'Active: 7031692 kB' 'Inactive: 3493732 kB' 'Active(anon): 6639992 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491472 kB' 'Mapped: 178480 kB' 'Shmem: 6151756 kB' 'KReclaimable: 223280 kB' 'Slab: 774392 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551112 kB' 'KernelStack: 20608 kB' 'PageTables: 9296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8130364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314968 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.509 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.510 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 
17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.511 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93265424 kB' 'MemUsed: 4350204 kB' 'SwapCached: 0 kB' 'Active: 1640000 kB' 'Inactive: 236388 kB' 'Active(anon): 1457200 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 'Inactive(file): 236388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1662884 kB' 'Mapped: 83948 kB' 'AnonPages: 216640 kB' 'Shmem: 1243696 kB' 'KernelStack: 11928 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84772 kB' 'Slab: 340004 kB' 'SReclaimable: 84772 kB' 'SUnreclaim: 255232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.511 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.511 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.512 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765528 kB' 'MemFree: 82509532 kB' 'MemUsed: 11255996 kB' 'SwapCached: 0 kB' 'Active: 5392012 kB' 'Inactive: 3257344 kB' 'Active(anon): 5183112 kB' 'Inactive(anon): 0 kB' 'Active(file): 208900 kB' 'Inactive(file): 3257344 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8374320 kB' 'Mapped: 94532 kB' 'AnonPages: 275036 kB' 'Shmem: 4908076 kB' 'KernelStack: 8744 kB' 'PageTables: 4844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138508 kB' 'Slab: 434388 kB' 'SReclaimable: 138508 kB' 'SUnreclaim: 295880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 
17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.513 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.770 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.770 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.771 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.771 17:11:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:14.771 node0=512 expecting 512
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:14.771 node1=512 expecting 512
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:14.771
00:03:14.771 real 0m2.840s
00:03:14.771 user 0m1.181s
00:03:14.771 sys 0m1.722s
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:14.771 17:11:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:14.771 ************************************
00:03:14.771 END TEST even_2G_alloc
00:03:14.771 ************************************
00:03:14.771 17:11:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:14.771 17:11:33 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:14.771 17:11:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:14.771 17:11:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:14.771 17:11:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:14.771 ************************************
00:03:14.771 START TEST odd_alloc
00:03:14.771 ************************************
00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- #
get_test_nr_hugepages_per_node 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:14.771 17:11:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:14.771 17:11:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:17.304 17:11:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:17.304 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:17.304 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:17.304 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc --
setup/hugepages.sh@93 -- # local resv 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175769480 kB' 'MemAvailable: 178622336 kB' 'Buffers: 3896 kB' 'Cached: 10033392 kB' 'SwapCached: 0 kB' 'Active: 7032136 kB' 'Inactive: 3493732 kB' 'Active(anon): 6640436 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491436 kB' 
'Mapped: 178956 kB' 'Shmem: 6151856 kB' 'KReclaimable: 223280 kB' 'Slab: 774364 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551084 kB' 'KernelStack: 20432 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8130700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.304 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.304 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.305 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.306 
17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175775504 kB' 'MemAvailable: 178628360 kB' 'Buffers: 3896 kB' 'Cached: 10033396 kB' 'SwapCached: 0 kB' 'Active: 7032140 kB' 'Inactive: 3493732 kB' 'Active(anon): 6640440 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491844 kB' 'Mapped: 178488 kB' 'Shmem: 6151860 kB' 'KReclaimable: 223280 kB' 'Slab: 774332 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551052 kB' 'KernelStack: 20432 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8132212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314824 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.306 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175775720 kB' 'MemAvailable: 178628576 kB' 'Buffers: 3896 kB' 'Cached: 10033412 kB' 'SwapCached: 0 kB' 'Active: 7031500 kB' 'Inactive: 3493732 kB' 'Active(anon): 6639800 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491200 kB' 'Mapped: 178488 kB' 'Shmem: 6151876 kB' 'KReclaimable: 223280 kB' 'Slab: 774332 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551052 kB' 'KernelStack: 20512 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8130740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314824 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.307 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 
17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.308 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:17.309 nr_hugepages=1025 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:17.309 resv_hugepages=0 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:17.309 surplus_hugepages=0 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:17.309 anon_hugepages=0 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175778180 kB' 'MemAvailable: 178631036 kB' 'Buffers: 3896 kB' 'Cached: 10033432 kB' 'SwapCached: 0 kB' 'Active: 7032116 kB' 'Inactive: 3493732 kB' 'Active(anon): 6640416 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491792 kB' 'Mapped: 178488 kB' 'Shmem: 6151896 kB' 'KReclaimable: 223280 kB' 'Slab: 774332 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551052 kB' 'KernelStack: 20608 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8132252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314920 kB' 'VmallocChunk: 0 
kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.309 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.310 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.311 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.311 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.311 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93264928 kB' 'MemUsed: 4350700 kB' 'SwapCached: 0 kB' 'Active: 1640912 kB' 'Inactive: 236388 kB' 'Active(anon): 1458112 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 'Inactive(file): 236388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1663032 kB' 'Mapped: 83956 kB' 'AnonPages: 217436 kB' 'Shmem: 1243844 kB' 'KernelStack: 11960 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84772 kB' 'Slab: 340060 kB' 'SReclaimable: 84772 kB' 'SUnreclaim: 255288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.311 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765528 kB' 'MemFree: 82513772 kB' 'MemUsed: 11251756 kB' 'SwapCached: 0 kB' 'Active: 5390992 kB' 'Inactive: 3257344 kB' 
'Active(anon): 5182092 kB' 'Inactive(anon): 0 kB' 'Active(file): 208900 kB' 'Inactive(file): 3257344 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8374320 kB' 'Mapped: 94532 kB' 'AnonPages: 274048 kB' 'Shmem: 4908076 kB' 'KernelStack: 8552 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138508 kB' 'Slab: 434272 kB' 'SReclaimable: 138508 kB' 'SUnreclaim: 295764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.312 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 
17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.313 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.314 17:11:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.314 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.314 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.314 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.314 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.314 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.314 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.314 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.314 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.314 17:11:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.314 17:11:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:17.314 node0=512 expecting 513 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc 
-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:17.314 node1=513 expecting 512 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:17.314 00:03:17.314 real 0m2.685s 00:03:17.314 user 0m1.040s 00:03:17.314 sys 0m1.602s 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.314 17:11:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:17.314 ************************************ 00:03:17.314 END TEST odd_alloc 00:03:17.314 ************************************ 00:03:17.314 17:11:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:17.314 17:11:36 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:17.314 17:11:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.314 17:11:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.314 17:11:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:17.314 ************************************ 00:03:17.314 START TEST custom_alloc 00:03:17.314 ************************************ 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:17.314 17:11:36 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:17.314 17:11:36 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- 
# local -g nodes_test 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # 
nodes_test=() 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.314 17:11:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.849 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:19.849 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 
00:03:19.849 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.849 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f 
mem 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 174698108 kB' 'MemAvailable: 177550964 kB' 'Buffers: 3896 kB' 'Cached: 10033540 kB' 'SwapCached: 0 kB' 'Active: 7031904 kB' 'Inactive: 3493732 kB' 'Active(anon): 6640204 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491004 kB' 'Mapped: 178488 kB' 'Shmem: 6152004 kB' 'KReclaimable: 223280 kB' 'Slab: 775124 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551844 kB' 'KernelStack: 20480 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8130116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
3145728 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.849 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.850 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.851 17:11:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.851 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 174699904 kB' 'MemAvailable: 177552760 kB' 'Buffers: 3896 kB' 'Cached: 10033544 kB' 'SwapCached: 0 kB' 'Active: 7032120 kB' 'Inactive: 3493732 kB' 'Active(anon): 6640420 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491720 kB' 'Mapped: 178484 kB' 'Shmem: 6152008 kB' 'KReclaimable: 223280 kB' 'Slab: 775132 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551852 kB' 'KernelStack: 20480 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8132428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314888 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB'
[... repetitive xtrace elided: setup/common.sh@31-32 reads each snapshot key with IFS=': ' and hits `continue` on every key from MemTotal through HugePages_Rsvd before HugePages_Surp matches below ...]
00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # read -r var val _ 00:03:19.853 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 174699932 kB' 'MemAvailable: 177552788 kB' 'Buffers: 3896 kB' 'Cached: 10033544 kB' 'SwapCached: 0 kB' 'Active: 7031944 kB' 'Inactive: 3493732 kB' 'Active(anon): 6640244 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491532 kB' 'Mapped: 178544 kB' 'Shmem: 6152008 kB' 'KReclaimable: 223280 kB' 'Slab: 775120 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551840 kB' 'KernelStack: 20432 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8130156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314856 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB'
[... repetitive xtrace elided: setup/common.sh@31-32 compares each snapshot key from MemTotal through Committed_AS against HugePages_Rsvd, skipping every one with `continue`; the trace resumes mid-scan below ...]
00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.855 17:11:38
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.855 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:19.856 nr_hugepages=1536 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:19.856 resv_hugepages=0 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:19.856 surplus_hugepages=0 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:19.856 anon_hugepages=0 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@28 -- # mapfile -t mem 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 174701960 kB' 'MemAvailable: 177554816 kB' 'Buffers: 3896 kB' 'Cached: 10033584 kB' 'SwapCached: 0 kB' 'Active: 7032204 kB' 'Inactive: 3493732 kB' 'Active(anon): 6640504 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491244 kB' 'Mapped: 178484 kB' 'Shmem: 6152048 kB' 'KReclaimable: 223280 kB' 'Slab: 774992 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551712 kB' 'KernelStack: 20384 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8129808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314824 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.856 17:11:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.856 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 
17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.857 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:19.858 17:11:38 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.858 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.859 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.859 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 93240732 kB' 'MemUsed: 4374896 kB' 'SwapCached: 0 kB' 'Active: 1639816 kB' 'Inactive: 236388 kB' 'Active(anon): 1457016 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 'Inactive(file): 236388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1663136 kB' 'Mapped: 83952 kB' 'AnonPages: 216248 kB' 'Shmem: 1243948 kB' 'KernelStack: 11880 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84772 kB' 'Slab: 340940 kB' 'SReclaimable: 84772 kB' 'SUnreclaim: 256168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.860 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.861 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765528 kB' 'MemFree: 81463360 kB' 'MemUsed: 12302168 kB' 'SwapCached: 0 kB' 'Active: 5391756 kB' 'Inactive: 3257344 kB' 'Active(anon): 5182856 kB' 'Inactive(anon): 0 kB' 'Active(file): 208900 kB' 'Inactive(file): 3257344 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8374344 kB' 'Mapped: 94532 kB' 'AnonPages: 274824 kB' 'Shmem: 4908100 kB' 'KernelStack: 8472 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138508 kB' 'Slab: 434052 kB' 'SReclaimable: 138508 kB' 'SUnreclaim: 295544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.862 17:11:38
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:19.862 node0=512 expecting 512 00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:19.862 node1=1024 expecting 1024 00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:19.862 00:03:19.862 real 0m2.444s 00:03:19.862 user 0m0.900s 00:03:19.862 sys 0m1.495s 00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:19.862 17:11:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:19.862 ************************************ 00:03:19.862 END TEST custom_alloc 00:03:19.862 ************************************ 00:03:19.862 17:11:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:19.862 17:11:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:19.862 17:11:38 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:19.862 17:11:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:19.862 17:11:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:19.862 ************************************ 00:03:19.862 START TEST no_shrink_alloc 00:03:19.862 ************************************ 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:19.862 17:11:38 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.862 17:11:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.394 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:22.394 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:80:04.1 
(8086 2021): Already using the vfio-pci driver 00:03:22.394 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175718072 kB' 'MemAvailable: 178570928 kB' 'Buffers: 3896 kB' 'Cached: 10033692 kB' 'SwapCached: 0 kB' 'Active: 7034308 kB' 'Inactive: 3493732 kB' 'Active(anon): 6642608 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493768 kB' 'Mapped: 178876 kB' 'Shmem: 6152156 kB' 'KReclaimable: 223280 kB' 'Slab: 774840 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551560 kB' 'KernelStack: 20448 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8130820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314920 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.657 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 
17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 
17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.658 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.659 17:11:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.659 17:11:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175722852 kB' 'MemAvailable: 178575708 kB' 'Buffers: 3896 kB' 'Cached: 10033692 kB' 'SwapCached: 0 kB' 'Active: 7034092 kB' 'Inactive: 3493732 kB' 'Active(anon): 6642392 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493580 kB' 'Mapped: 178508 kB' 'Shmem: 6152156 kB' 'KReclaimable: 223280 kB' 'Slab: 774824 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551544 kB' 'KernelStack: 20416 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8130836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB'
00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.659 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... per-key scan elided: for each remaining /proc/meminfo key the trace repeats "IFS=': '", "read -r var val _", "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]", "continue" until HugePages_Surp matches ...]
00:03:22.660 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.660 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.660 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:22.660 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:22.660 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:22.660 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:22.660 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:22.660 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:22.660 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.660 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.660 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.661 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.661 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.661 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.661 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.661 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.661 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175723060 kB' 'MemAvailable: 178575916 kB' 'Buffers: 3896 kB' 'Cached: 10033708 kB' 'SwapCached: 0 kB' 'Active: 7033784 kB' 'Inactive: 3493732 kB' 'Active(anon): 6642084 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493248 kB' 'Mapped: 178508 kB' 'Shmem: 6152172 kB' 'KReclaimable: 223280 kB' 'Slab: 774932 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551652 kB' 'KernelStack: 20416 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8130860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB'
00:03:22.661 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:22.661 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... per-key scan elided: the trace repeats the same "IFS=': '", "read -r var val _", "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]", "continue" sequence for the keys MemFree through VmallocUsed ...]
00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.662 nr_hugepages=1024 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.662 resv_hugepages=0 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
00:03:22.662 surplus_hugepages=0 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.662 anon_hugepages=0 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.662 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175723060 kB' 'MemAvailable: 178575916 kB' 'Buffers: 3896 kB' 'Cached: 10033728 kB' 'SwapCached: 0 kB' 'Active: 7033804 kB' 'Inactive: 3493732 kB' 'Active(anon): 6642104 kB' 
'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493252 kB' 'Mapped: 178508 kB' 'Shmem: 6152192 kB' 'KReclaimable: 223280 kB' 'Slab: 774932 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551652 kB' 'KernelStack: 20416 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8130884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.663 
17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.663 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 
17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc --
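The get_nodes step traced above globs /sys/devices/system/node/node+([0-9]) (an extglob pattern) and records a hugepage count per NUMA node. A hedged sketch of the same enumeration against a fake sysfs tree; the hugepages_total file name here is a stand-in, not the real per-node sysfs layout:

```shell
#!/usr/bin/env bash
# Sketch of get_nodes: enumerate node<N> directories and keep a per-node
# count keyed by the node number stripped out of the path.
shopt -s extglob nullglob

# Fake sysfs tree standing in for /sys/devices/system/node on this host.
sysfs=/tmp/fake_sysfs/devices/system/node
mkdir -p "$sysfs/node0" "$sysfs/node1"
echo 1024 > "$sysfs/node0/hugepages_total"   # hypothetical per-node count file
echo 0    > "$sysfs/node1/hugepages_total"

declare -A nodes_sys
for node in "$sysfs"/node+([0-9]); do
    # ${node##*node} strips everything up to the last "node", leaving "0", "1", ...
    nodes_sys[${node##*node}]=$(< "$node/hugepages_total")
done
no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes"                    # prints no_nodes=2
```

The `${node##*node}` expansion is the same trick the trace shows for turning a sysfs path into a bare node index.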
setup/common.sh@28 -- # mapfile -t mem
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.664 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92193992 kB' 'MemUsed: 5421636 kB' 'SwapCached: 0 kB' 'Active: 1641044 kB' 'Inactive: 236388 kB' 'Active(anon): 1458244 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 'Inactive(file): 236388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1663268 kB' 'Mapped: 83976 kB' 'AnonPages: 217328 kB' 'Shmem: 1244080 kB' 'KernelStack: 11944 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84772 kB' 'Slab: 340576 kB' 'SReclaimable: 84772 kB' 'SUnreclaim: 255804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:22.665 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # scan loop: fields MemTotal through HugePages_Free do not match HugePages_Surp; continue
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.666 17:11:41 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:25.194 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:25.194 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:25.194 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:25.456 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:25.456 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:25.456 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:25.456 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:25.456 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:25.456 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:25.456 17:11:44 setup.sh.hugepages.no_shrink_alloc --
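The INFO line above records setup.sh declining to shrink an existing allocation: the run requested NRHUGE=512 pages, but node0 already held 1024, and with CLEAR_HUGE=no an allocation that meets or exceeds the request is kept as-is. A rough sketch of that decision, with a /tmp file standing in for the sysfs nr_hugepages knob; the message text and branch logic are illustrative, not the actual scripts/setup.sh:

```shell
#!/usr/bin/env bash
# Sketch: keep an existing hugepage allocation when it already satisfies
# the request, instead of writing a smaller value back to sysfs.
NRHUGE=512
CLEAR_HUGE=no
nr_hugepages_file=/tmp/nr_hugepages          # stand-in for the sysfs knob
echo 1024 > "$nr_hugepages_file"             # pre-existing allocation on node0

current=$(< "$nr_hugepages_file")
if [[ $CLEAR_HUGE == no && $current -ge $NRHUGE ]]; then
    echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
else
    echo "$NRHUGE" > "$nr_hugepages_file"    # (re)allocate to the requested size
fi
```

This also explains why the subsequent verify_nr_hugepages pass still expects 1024 rather than 512.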
setup/hugepages.sh@93 -- # local resv
00:03:25.456 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:25.456 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:25.456 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:25.456 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:25.456 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:25.456 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:25.457 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.457 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.457 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.457 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.457 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.457 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.457 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.457 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.457 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175726028 kB' 'MemAvailable: 178578884 kB' 'Buffers: 3896 kB' 'Cached: 10033816 kB' 'SwapCached: 0 kB' 'Active: 7034780 kB' 'Inactive: 3493732 kB' 'Active(anon): 6643080 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB'
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493544 kB' 'Mapped: 178628 kB' 'Shmem: 6152280 kB' 'KReclaimable: 223280 kB' 'Slab: 775032 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551752 kB' 'KernelStack: 20400 kB' 'PageTables: 8460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8131328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314872 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB'
00:03:25.457 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # scan loop: fields MemTotal through NFS_Unstable do not match AnonHugePages; continue
00:03:25.458 17:11:44
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.458 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.458 
17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175726708 kB' 'MemAvailable: 178579564 kB' 'Buffers: 3896 kB' 'Cached: 10033820 kB' 'SwapCached: 0 kB' 'Active: 7034472 kB' 'Inactive: 3493732 kB' 'Active(anon): 6642772 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493252 kB' 'Mapped: 
178620 kB' 'Shmem: 6152284 kB' 'KReclaimable: 223280 kB' 'Slab: 775012 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551732 kB' 'KernelStack: 20416 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8130976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314840 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:25.458 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # scan /proc/meminfo: read each field (MemTotal through HugePages_Rsvd) and continue on every non-match 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175727312 kB' 'MemAvailable: 178580168 kB' 
'Buffers: 3896 kB' 'Cached: 10033840 kB' 'SwapCached: 0 kB' 'Active: 7033864 kB' 'Inactive: 3493732 kB' 'Active(anon): 6642164 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493064 kB' 'Mapped: 178544 kB' 'Shmem: 6152304 kB' 'KReclaimable: 223280 kB' 'Slab: 774996 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551716 kB' 'KernelStack: 20368 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8131004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314824 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.460 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.461 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.462 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.462 
17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:25.462 nr_hugepages=1024 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:25.462 resv_hugepages=0 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:25.462 surplus_hugepages=0 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:25.462 anon_hugepages=0 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:25.462 17:11:44 
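The trace above is `setup/common.sh`'s `get_meminfo` helper scanning `/proc/meminfo` key by key with `IFS=': '` and `read -r var val _`, printing the value once the requested key matches. A minimal standalone sketch of that pattern (assumption: this covers only the system-wide case; the per-NUMA-node meminfo path and the `Node N ` prefix stripping visible in the log are omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern from the trace: look up one key in
# /proc/meminfo and print its value (in kB for sized fields, a plain
# count for HugePages_* fields).
get_meminfo() {
    local get=$1 var val _
    # IFS=': ' splits "HugePages_Total:    1024" into var=HugePages_Total,
    # val=1024, with any trailing "kB" landing in _.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1   # key not present
}

# Usage, mirroring hugepages.sh@99-105 in the log:
echo "MemTotal=$(get_meminfo MemTotal) kB"
echo "nr_hugepages=$(get_meminfo HugePages_Total)"
```

The values are host-dependent, so no fixed output is shown; on the CI node above, `HugePages_Total` resolves to 1024.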
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.462 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381156 kB' 'MemFree: 175726808 kB' 'MemAvailable: 178579664 kB' 'Buffers: 3896 kB' 'Cached: 10033860 kB' 'SwapCached: 0 kB' 'Active: 7033832 kB' 'Inactive: 3493732 kB' 'Active(anon): 6642132 kB' 'Inactive(anon): 0 kB' 'Active(file): 391700 kB' 'Inactive(file): 3493732 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493000 kB' 'Mapped: 178544 kB' 'Shmem: 6152324 kB' 'KReclaimable: 223280 kB' 'Slab: 774996 kB' 'SReclaimable: 223280 kB' 'SUnreclaim: 551716 kB' 'KernelStack: 20352 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8131028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314824 kB' 'VmallocChunk: 0 kB' 'Percpu: 69504 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2792404 kB' 'DirectMap2M: 16809984 kB' 'DirectMap1G: 182452224 kB'
[... setup/common.sh@31-32 per-key xtrace (IFS=': '; read -r var val _; [[ $var == HugePages_Total ]] || continue) repeated for each meminfo field above; identical iterations collapsed ...]
00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:25.463 17:11:44
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.463 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc 
-- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92193416 kB' 'MemUsed: 5422212 kB' 'SwapCached: 0 kB' 'Active: 1640484 kB' 'Inactive: 236388 kB' 'Active(anon): 1457684 kB' 'Inactive(anon): 0 kB' 'Active(file): 182800 kB' 
'Inactive(file): 236388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1663380 kB' 'Mapped: 84012 kB' 'AnonPages: 216612 kB' 'Shmem: 1244192 kB' 'KernelStack: 11912 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84772 kB' 'Slab: 340684 kB' 'SReclaimable: 84772 kB' 'SUnreclaim: 255912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 
17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 
17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.464 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.465 17:11:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:25.465 node0=1024 expecting 1024 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:25.465 00:03:25.465 real 0m5.603s 00:03:25.465 user 0m2.295s 00:03:25.465 sys 0m3.424s 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.465 17:11:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:25.465 ************************************ 00:03:25.465 END TEST no_shrink_alloc 00:03:25.465 ************************************ 00:03:25.465 17:11:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:25.465 17:11:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:25.465 00:03:25.465 real 0m20.352s 00:03:25.465 user 0m7.775s 00:03:25.465 sys 0m11.981s 00:03:25.465 17:11:44 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.465 17:11:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:25.465 ************************************ 00:03:25.465 END TEST hugepages 00:03:25.465 ************************************ 00:03:25.724 17:11:44 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:25.724 17:11:44 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:25.724 17:11:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.724 17:11:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.724 17:11:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:25.724 ************************************ 00:03:25.724 START TEST driver 00:03:25.724 ************************************ 00:03:25.724 17:11:44 setup.sh.driver -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:25.724 * Looking for test storage... 00:03:25.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:25.724 17:11:44 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:25.724 17:11:44 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.724 17:11:44 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.967 17:11:48 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:29.967 17:11:48 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.967 17:11:48 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.967 17:11:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:29.967 ************************************ 00:03:29.967 START TEST guess_driver 00:03:29.967 ************************************ 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:29.967 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:29.967 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:29.967 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:29.967 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:29.967 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:29.967 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:29.967 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:29.967 Looking for driver=vfio-pci 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ 
_ _ _ marker setup_driver 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.967 17:11:48 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.502 17:11:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.502 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.502 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.502 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.502 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.502 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.502 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.502 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.502 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.502 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:32.503 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.439 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.439 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.439 17:11:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.439 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:33.439 17:11:52 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:33.439 17:11:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 
00:03:33.439 17:11:52 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.630 00:03:37.630 real 0m7.550s 00:03:37.630 user 0m2.198s 00:03:37.630 sys 0m3.773s 00:03:37.630 17:11:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.630 17:11:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:37.630 ************************************ 00:03:37.630 END TEST guess_driver 00:03:37.630 ************************************ 00:03:37.630 17:11:55 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:37.630 00:03:37.630 real 0m11.554s 00:03:37.630 user 0m3.301s 00:03:37.630 sys 0m5.874s 00:03:37.630 17:11:55 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.630 17:11:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:37.630 ************************************ 00:03:37.630 END TEST driver 00:03:37.630 ************************************ 00:03:37.630 17:11:55 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:37.630 17:11:55 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:37.630 17:11:55 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.631 17:11:55 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.631 17:11:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:37.631 ************************************ 00:03:37.631 START TEST devices 00:03:37.631 ************************************ 00:03:37.631 17:11:55 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:37.631 * Looking for test storage... 
00:03:37.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:37.631 17:11:55 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:37.631 17:11:55 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:37.631 17:11:55 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.631 17:11:55 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.163 17:11:58 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:40.163 17:11:58 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:40.163 17:11:58 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:40.163 17:11:58 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:40.163 17:11:58 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:40.163 17:11:58 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:40.163 17:11:58 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:40.163 17:11:58 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:40.163 17:11:58 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:40.163 17:11:58 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:40.163 17:11:58 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:40.163 17:11:58 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:40.164 17:11:58 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:40.164 17:11:58 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:40.164 17:11:58 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.164 17:11:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:03:40.164 17:11:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:40.164 17:11:58 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:40.164 17:11:58 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:40.164 17:11:58 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:40.164 17:11:58 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:40.164 17:11:58 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:40.423 No valid GPT data, bailing 00:03:40.423 17:11:58 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:40.423 17:11:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:40.423 17:11:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:40.423 17:11:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:40.423 17:11:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:40.423 17:11:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:40.423 17:11:58 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:40.423 17:11:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:40.423 17:11:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.423 17:11:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:40.423 17:11:58 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:40.423 17:11:58 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:40.423 17:11:58 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:40.423 17:11:58 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.423 17:11:58 setup.sh.devices -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.423 17:11:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:40.423 ************************************ 00:03:40.423 START TEST nvme_mount 00:03:40.423 ************************************ 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:40.423 17:11:59 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:40.423 17:11:59 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:41.358 Creating new GPT entries in memory. 00:03:41.358 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:41.358 other utilities. 00:03:41.358 17:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:41.358 17:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.359 17:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:41.359 17:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:41.359 17:12:00 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:42.296 Creating new GPT entries in memory. 00:03:42.296 The operation has completed successfully. 
00:03:42.296 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:42.296 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.296 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3867867 00:03:42.554 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.554 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:42.554 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.554 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:42.554 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:42.554 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.555 17:12:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.087 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.088 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.088 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:45.088 
17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.088 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.346 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:45.346 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:45.346 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:45.346 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:45.346 17:12:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:45.606 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:45.606 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:45.606 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:45.606 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.606 17:12:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:48.144 17:12:06 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.144 17:12:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.434 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:51.435 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:51.435 17:12:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:51.435 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:51.435 00:03:51.435 real 0m10.633s 00:03:51.435 user 0m3.164s 00:03:51.435 sys 0m5.310s 00:03:51.435 17:12:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.435 17:12:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:51.435 ************************************ 00:03:51.435 END TEST nvme_mount 00:03:51.435 ************************************ 00:03:51.435 17:12:09 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:51.435 17:12:09 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
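The nvme_mount test that just finished runs a fixed cycle: wipe the drive, make an ext4 filesystem, mount it, drop a marker file, verify, then unmount and wipe again. As a summary, here is a dry-run sketch of that sequence; the `run` wrapper and the abbreviated mount point are illustrative, but the commands themselves are the ones visible in the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvme_mount cycle: commands are echoed, not
# executed, because wipefs/mkfs are destructive and need the real
# NVMe drive behind 0000:5e:00.0.
run() { echo "$*"; }              # swap the body for "$@" to really execute

dev=/dev/nvme0n1
mnt=test/setup/nvme_mount         # abbreviated mount point (illustrative)

run wipefs --all "$dev"           # clear old partition/filesystem signatures
run mkdir -p "$mnt"
run mkfs.ext4 -qF "$dev" 1024M    # 1 GiB ext4 over the raw device
run mount "$dev" "$mnt"
run touch "$mnt/test_nvme"        # dummy file the verify step looks for
run umount "$mnt"
run wipefs --all "$dev"           # cleanup, as at the end of the test
```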
00:03:51.435 17:12:09 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.435 17:12:09 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.435 17:12:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:51.435 ************************************ 00:03:51.435 START TEST dm_mount 00:03:51.435 ************************************ 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:51.435 17:12:09 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:52.001 Creating new GPT entries in memory. 00:03:52.001 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:52.001 other utilities. 00:03:52.001 17:12:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:52.001 17:12:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.001 17:12:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:52.001 17:12:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:52.001 17:12:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:53.379 Creating new GPT entries in memory. 00:03:53.379 The operation has completed successfully. 00:03:53.379 17:12:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:53.379 17:12:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.379 17:12:11 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:53.379 17:12:11 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:53.379 17:12:11 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:54.316 The operation has completed successfully. 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3872457 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- 
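The two sgdisk calls above (`--new=1:2048:2099199`, `--new=2:2099200:4196351`) get their sector ranges from the arithmetic traced out of setup/common.sh: the byte size is divided by 512, the first partition starts at sector 2048, and each later partition starts right after its predecessor. A self-contained sketch of that calculation, reproducing the two 1 GiB ranges in the log:

```shell
#!/usr/bin/env bash
# Sketch of the partition-offset arithmetic from setup/common.sh:
# `size` bytes per partition, converted to 512-byte sectors; the
# first partition starts at sector 2048, each later one immediately
# after its predecessor.
partition_ranges() {
    local part_no=$1 size=$2
    local part part_start=0 part_end=0
    (( size /= 512 ))   # bytes -> 512-byte sectors
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        echo "--new=$part:$part_start:$part_end"
    done
}

partition_ranges 2 1073741824   # two 1 GiB partitions
# --new=1:2048:2099199
# --new=2:2099200:4196351
```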
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.316 17:12:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.850 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:03:56.851 17:12:15 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.851 17:12:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:03:59.423 17:12:17 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:59.423 17:12:18 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.423 17:12:18 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:59.423 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:59.423 17:12:18 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:59.423 17:12:18 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:59.423 00:03:59.423 real 0m8.334s 00:03:59.423 user 0m1.933s 00:03:59.423 sys 0m3.387s 00:03:59.423 17:12:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.423 17:12:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:59.423 ************************************ 00:03:59.423 END TEST dm_mount 00:03:59.423 ************************************ 00:03:59.423 17:12:18 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:59.423 17:12:18 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:59.423 17:12:18 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:59.423 17:12:18 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.423 17:12:18 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.423 17:12:18 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:59.423 17:12:18 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.423 17:12:18 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:59.683 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:59.683 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:03:59.683 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:59.683 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:59.683 17:12:18 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:59.683 17:12:18 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:59.683 17:12:18 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:59.683 17:12:18 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.683 17:12:18 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:59.683 17:12:18 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.683 17:12:18 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:59.683 00:03:59.683 real 0m22.482s 00:03:59.683 user 0m6.307s 00:03:59.683 sys 0m10.856s 00:03:59.683 17:12:18 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.683 17:12:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:59.683 ************************************ 00:03:59.683 END TEST devices 00:03:59.683 ************************************ 00:03:59.683 17:12:18 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:59.683 00:03:59.683 real 1m13.818s 00:03:59.683 user 0m23.762s 00:03:59.683 sys 0m40.257s 00:03:59.683 17:12:18 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.683 17:12:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.683 ************************************ 00:03:59.683 END TEST setup.sh 00:03:59.683 ************************************ 00:03:59.683 17:12:18 -- common/autotest_common.sh@1142 -- # return 0 00:03:59.683 17:12:18 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:02.998 Hugepages 00:04:02.998 node hugesize free / total 
00:04:02.998 node0 1048576kB 0 / 0 00:04:02.998 node0 2048kB 2048 / 2048 00:04:02.998 node1 1048576kB 0 / 0 00:04:02.998 node1 2048kB 0 / 0 00:04:02.998 00:04:02.998 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:02.998 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:02.998 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:02.998 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:02.998 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:02.998 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:02.998 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:02.998 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:02.998 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:02.998 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:02.998 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:02.998 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:02.998 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:02.998 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:02.998 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:02.998 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:02.998 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:02.998 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:02.998 17:12:21 -- spdk/autotest.sh@130 -- # uname -s 00:04:02.998 17:12:21 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:02.998 17:12:21 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:02.998 17:12:21 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.527 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 
0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:05.527 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:06.094 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:06.094 17:12:24 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:07.471 17:12:25 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:07.471 17:12:25 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:07.471 17:12:25 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:07.471 17:12:25 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:07.471 17:12:25 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:07.471 17:12:25 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:07.471 17:12:25 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:07.471 17:12:25 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:07.471 17:12:25 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:07.471 17:12:25 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:07.471 17:12:25 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:07.471 17:12:25 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.002 Waiting for block devices as requested 00:04:10.002 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:10.002 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:10.002 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:10.002 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:10.002 0000:00:04.4 (8086 
2021): vfio-pci -> ioatdma 00:04:10.002 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:10.261 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:10.261 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:10.261 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:10.519 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:10.519 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:10.519 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:10.519 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:10.778 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:10.778 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:10.778 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:11.036 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:11.036 17:12:29 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:11.036 17:12:29 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:11.036 17:12:29 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:11.036 17:12:29 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:11.036 17:12:29 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:11.036 17:12:29 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:11.036 17:12:29 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:11.036 17:12:29 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:11.036 17:12:29 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:11.036 17:12:29 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:11.036 17:12:29 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:11.036 17:12:29 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:11.036 17:12:29 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:11.036 17:12:29 -- 
common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:11.036 17:12:29 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:11.036 17:12:29 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:11.036 17:12:29 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:11.036 17:12:29 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:11.036 17:12:29 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:11.036 17:12:29 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:11.036 17:12:29 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:11.036 17:12:29 -- common/autotest_common.sh@1557 -- # continue 00:04:11.036 17:12:29 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:11.036 17:12:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:11.036 17:12:29 -- common/autotest_common.sh@10 -- # set +x 00:04:11.036 17:12:29 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:11.036 17:12:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:11.036 17:12:29 -- common/autotest_common.sh@10 -- # set +x 00:04:11.036 17:12:29 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.571 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.571 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.571 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.571 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.571 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.571 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.571 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.571 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.571 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.571 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.830 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.830 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.830 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.830 0000:80:04.2 (8086 
2021): ioatdma -> vfio-pci 00:04:13.830 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.830 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.764 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:14.764 17:12:33 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:14.764 17:12:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:14.764 17:12:33 -- common/autotest_common.sh@10 -- # set +x 00:04:14.764 17:12:33 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:14.764 17:12:33 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:14.764 17:12:33 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:14.764 17:12:33 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:14.764 17:12:33 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:14.764 17:12:33 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:14.764 17:12:33 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:14.764 17:12:33 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:14.764 17:12:33 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.764 17:12:33 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:14.764 17:12:33 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:14.764 17:12:33 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:14.764 17:12:33 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:14.764 17:12:33 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:14.764 17:12:33 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:14.764 17:12:33 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:14.764 17:12:33 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:14.764 17:12:33 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:14.764 17:12:33 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:14.764 17:12:33 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:14.764 17:12:33 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3881418 00:04:14.764 17:12:33 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.764 17:12:33 -- common/autotest_common.sh@1598 -- # waitforlisten 3881418 00:04:14.764 17:12:33 -- common/autotest_common.sh@829 -- # '[' -z 3881418 ']' 00:04:14.764 17:12:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.764 17:12:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:14.764 17:12:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.764 17:12:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:14.764 17:12:33 -- common/autotest_common.sh@10 -- # set +x 00:04:14.764 [2024-07-12 17:12:33.519910] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:04:14.764 [2024-07-12 17:12:33.519956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881418 ] 00:04:14.764 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.022 [2024-07-12 17:12:33.572409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.022 [2024-07-12 17:12:33.645742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.589 17:12:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:15.589 17:12:34 -- common/autotest_common.sh@862 -- # return 0 00:04:15.589 17:12:34 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:15.589 17:12:34 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:15.589 17:12:34 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:18.876 nvme0n1 00:04:18.876 17:12:37 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:18.876 [2024-07-12 17:12:37.471930] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:18.876 request: 00:04:18.876 { 00:04:18.876 "nvme_ctrlr_name": "nvme0", 00:04:18.876 "password": "test", 00:04:18.876 "method": "bdev_nvme_opal_revert", 00:04:18.876 "req_id": 1 00:04:18.876 } 00:04:18.876 Got JSON-RPC error response 00:04:18.876 response: 00:04:18.876 { 00:04:18.876 "code": -32602, 00:04:18.876 "message": "Invalid parameters" 00:04:18.876 } 00:04:18.876 17:12:37 -- common/autotest_common.sh@1604 -- # true 00:04:18.876 17:12:37 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:18.876 17:12:37 -- common/autotest_common.sh@1608 -- # killprocess 3881418 00:04:18.876 17:12:37 -- 
common/autotest_common.sh@948 -- # '[' -z 3881418 ']' 00:04:18.876 17:12:37 -- common/autotest_common.sh@952 -- # kill -0 3881418 00:04:18.877 17:12:37 -- common/autotest_common.sh@953 -- # uname 00:04:18.877 17:12:37 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:18.877 17:12:37 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3881418 00:04:18.877 17:12:37 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:18.877 17:12:37 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:18.877 17:12:37 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3881418' 00:04:18.877 killing process with pid 3881418 00:04:18.877 17:12:37 -- common/autotest_common.sh@967 -- # kill 3881418 00:04:18.877 17:12:37 -- common/autotest_common.sh@972 -- # wait 3881418 00:04:20.781 17:12:39 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:20.781 17:12:39 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:20.781 17:12:39 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:20.781 17:12:39 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:20.781 17:12:39 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:20.781 17:12:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:20.781 17:12:39 -- common/autotest_common.sh@10 -- # set +x 00:04:20.781 17:12:39 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:20.781 17:12:39 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:20.781 17:12:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.781 17:12:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.781 17:12:39 -- common/autotest_common.sh@10 -- # set +x 00:04:20.781 ************************************ 00:04:20.781 START TEST env 00:04:20.781 ************************************ 00:04:20.781 17:12:39 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:20.781 * Looking 
for test storage... 00:04:20.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:20.781 17:12:39 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.781 17:12:39 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.781 17:12:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.781 17:12:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.781 ************************************ 00:04:20.781 START TEST env_memory 00:04:20.781 ************************************ 00:04:20.781 17:12:39 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.781 00:04:20.781 00:04:20.781 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.781 http://cunit.sourceforge.net/ 00:04:20.781 00:04:20.781 00:04:20.781 Suite: memory 00:04:20.781 Test: alloc and free memory map ...[2024-07-12 17:12:39.357018] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.781 passed 00:04:20.781 Test: mem map translation ...[2024-07-12 17:12:39.375886] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.781 [2024-07-12 17:12:39.375899] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.781 [2024-07-12 17:12:39.375950] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.781 [2024-07-12 17:12:39.375957] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.781 passed 00:04:20.781 Test: mem map registration ...[2024-07-12 17:12:39.412761] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:20.781 [2024-07-12 17:12:39.412778] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:20.781 passed 00:04:20.781 Test: mem map adjacent registrations ...passed 00:04:20.781 00:04:20.781 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.781 suites 1 1 n/a 0 0 00:04:20.781 tests 4 4 4 0 0 00:04:20.781 asserts 152 152 152 0 n/a 00:04:20.781 00:04:20.781 Elapsed time = 0.134 seconds 00:04:20.781 00:04:20.781 real 0m0.146s 00:04:20.781 user 0m0.136s 00:04:20.781 sys 0m0.009s 00:04:20.781 17:12:39 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.781 17:12:39 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:20.781 ************************************ 00:04:20.781 END TEST env_memory 00:04:20.781 ************************************ 00:04:20.781 17:12:39 env -- common/autotest_common.sh@1142 -- # return 0 00:04:20.781 17:12:39 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.781 17:12:39 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.781 17:12:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.781 17:12:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.781 ************************************ 00:04:20.781 START TEST env_vtophys 00:04:20.781 ************************************ 00:04:20.781 17:12:39 env.env_vtophys -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.781 EAL: lib.eal log level changed from notice to debug 00:04:20.781 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.781 EAL: Detected lcore 1 as core 1 on socket 0 00:04:20.781 EAL: Detected lcore 2 as core 2 on socket 0 00:04:20.781 EAL: Detected lcore 3 as core 3 on socket 0 00:04:20.781 EAL: Detected lcore 4 as core 4 on socket 0 00:04:20.781 EAL: Detected lcore 5 as core 5 on socket 0 00:04:20.781 EAL: Detected lcore 6 as core 6 on socket 0 00:04:20.781 EAL: Detected lcore 7 as core 8 on socket 0 00:04:20.781 EAL: Detected lcore 8 as core 9 on socket 0 00:04:20.781 EAL: Detected lcore 9 as core 10 on socket 0 00:04:20.781 EAL: Detected lcore 10 as core 11 on socket 0 00:04:20.781 EAL: Detected lcore 11 as core 12 on socket 0 00:04:20.781 EAL: Detected lcore 12 as core 13 on socket 0 00:04:20.781 EAL: Detected lcore 13 as core 16 on socket 0 00:04:20.781 EAL: Detected lcore 14 as core 17 on socket 0 00:04:20.781 EAL: Detected lcore 15 as core 18 on socket 0 00:04:20.781 EAL: Detected lcore 16 as core 19 on socket 0 00:04:20.781 EAL: Detected lcore 17 as core 20 on socket 0 00:04:20.781 EAL: Detected lcore 18 as core 21 on socket 0 00:04:20.781 EAL: Detected lcore 19 as core 25 on socket 0 00:04:20.781 EAL: Detected lcore 20 as core 26 on socket 0 00:04:20.781 EAL: Detected lcore 21 as core 27 on socket 0 00:04:20.781 EAL: Detected lcore 22 as core 28 on socket 0 00:04:20.781 EAL: Detected lcore 23 as core 29 on socket 0 00:04:20.781 EAL: Detected lcore 24 as core 0 on socket 1 00:04:20.781 EAL: Detected lcore 25 as core 1 on socket 1 00:04:20.781 EAL: Detected lcore 26 as core 2 on socket 1 00:04:20.781 EAL: Detected lcore 27 as core 3 on socket 1 00:04:20.781 EAL: Detected lcore 28 as core 4 on socket 1 00:04:20.781 EAL: Detected lcore 29 as core 5 on socket 1 00:04:20.781 EAL: Detected lcore 30 as core 6 on socket 1 00:04:20.781 EAL: Detected lcore 31 as core 9 on socket 
1 00:04:20.781 EAL: Detected lcore 32 as core 10 on socket 1 00:04:20.781 EAL: Detected lcore 33 as core 11 on socket 1 00:04:20.781 EAL: Detected lcore 34 as core 12 on socket 1 00:04:20.781 EAL: Detected lcore 35 as core 13 on socket 1 00:04:20.781 EAL: Detected lcore 36 as core 16 on socket 1 00:04:20.781 EAL: Detected lcore 37 as core 17 on socket 1 00:04:20.781 EAL: Detected lcore 38 as core 18 on socket 1 00:04:20.781 EAL: Detected lcore 39 as core 19 on socket 1 00:04:20.781 EAL: Detected lcore 40 as core 20 on socket 1 00:04:20.781 EAL: Detected lcore 41 as core 21 on socket 1 00:04:20.781 EAL: Detected lcore 42 as core 24 on socket 1 00:04:20.781 EAL: Detected lcore 43 as core 25 on socket 1 00:04:20.781 EAL: Detected lcore 44 as core 26 on socket 1 00:04:20.781 EAL: Detected lcore 45 as core 27 on socket 1 00:04:20.781 EAL: Detected lcore 46 as core 28 on socket 1 00:04:20.781 EAL: Detected lcore 47 as core 29 on socket 1 00:04:20.781 EAL: Detected lcore 48 as core 0 on socket 0 00:04:20.781 EAL: Detected lcore 49 as core 1 on socket 0 00:04:20.781 EAL: Detected lcore 50 as core 2 on socket 0 00:04:20.781 EAL: Detected lcore 51 as core 3 on socket 0 00:04:20.781 EAL: Detected lcore 52 as core 4 on socket 0 00:04:20.781 EAL: Detected lcore 53 as core 5 on socket 0 00:04:20.781 EAL: Detected lcore 54 as core 6 on socket 0 00:04:20.781 EAL: Detected lcore 55 as core 8 on socket 0 00:04:20.781 EAL: Detected lcore 56 as core 9 on socket 0 00:04:20.781 EAL: Detected lcore 57 as core 10 on socket 0 00:04:20.781 EAL: Detected lcore 58 as core 11 on socket 0 00:04:20.781 EAL: Detected lcore 59 as core 12 on socket 0 00:04:20.781 EAL: Detected lcore 60 as core 13 on socket 0 00:04:20.781 EAL: Detected lcore 61 as core 16 on socket 0 00:04:20.781 EAL: Detected lcore 62 as core 17 on socket 0 00:04:20.781 EAL: Detected lcore 63 as core 18 on socket 0 00:04:20.781 EAL: Detected lcore 64 as core 19 on socket 0 00:04:20.781 EAL: Detected lcore 65 as core 20 on socket 0 
00:04:20.781 EAL: Detected lcore 66 as core 21 on socket 0 00:04:20.781 EAL: Detected lcore 67 as core 25 on socket 0 00:04:20.781 EAL: Detected lcore 68 as core 26 on socket 0 00:04:20.781 EAL: Detected lcore 69 as core 27 on socket 0 00:04:20.781 EAL: Detected lcore 70 as core 28 on socket 0 00:04:20.781 EAL: Detected lcore 71 as core 29 on socket 0 00:04:20.781 EAL: Detected lcore 72 as core 0 on socket 1 00:04:20.781 EAL: Detected lcore 73 as core 1 on socket 1 00:04:20.781 EAL: Detected lcore 74 as core 2 on socket 1 00:04:20.781 EAL: Detected lcore 75 as core 3 on socket 1 00:04:20.781 EAL: Detected lcore 76 as core 4 on socket 1 00:04:20.781 EAL: Detected lcore 77 as core 5 on socket 1 00:04:20.781 EAL: Detected lcore 78 as core 6 on socket 1 00:04:20.781 EAL: Detected lcore 79 as core 9 on socket 1 00:04:20.781 EAL: Detected lcore 80 as core 10 on socket 1 00:04:20.781 EAL: Detected lcore 81 as core 11 on socket 1 00:04:20.781 EAL: Detected lcore 82 as core 12 on socket 1 00:04:20.781 EAL: Detected lcore 83 as core 13 on socket 1 00:04:20.781 EAL: Detected lcore 84 as core 16 on socket 1 00:04:20.781 EAL: Detected lcore 85 as core 17 on socket 1 00:04:20.781 EAL: Detected lcore 86 as core 18 on socket 1 00:04:20.781 EAL: Detected lcore 87 as core 19 on socket 1 00:04:20.781 EAL: Detected lcore 88 as core 20 on socket 1 00:04:20.781 EAL: Detected lcore 89 as core 21 on socket 1 00:04:20.781 EAL: Detected lcore 90 as core 24 on socket 1 00:04:20.781 EAL: Detected lcore 91 as core 25 on socket 1 00:04:20.781 EAL: Detected lcore 92 as core 26 on socket 1 00:04:20.781 EAL: Detected lcore 93 as core 27 on socket 1 00:04:20.781 EAL: Detected lcore 94 as core 28 on socket 1 00:04:20.781 EAL: Detected lcore 95 as core 29 on socket 1 00:04:20.781 EAL: Maximum logical cores by configuration: 128 00:04:20.781 EAL: Detected CPU lcores: 96 00:04:20.781 EAL: Detected NUMA nodes: 2 00:04:20.782 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:20.782 EAL: Detected 
shared linkage of DPDK 00:04:20.782 EAL: No shared files mode enabled, IPC will be disabled 00:04:21.041 EAL: Bus pci wants IOVA as 'DC' 00:04:21.041 EAL: Buses did not request a specific IOVA mode. 00:04:21.041 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:21.041 EAL: Selected IOVA mode 'VA' 00:04:21.041 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.041 EAL: Probing VFIO support... 00:04:21.041 EAL: IOMMU type 1 (Type 1) is supported 00:04:21.041 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:21.041 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:21.041 EAL: VFIO support initialized 00:04:21.041 EAL: Ask a virtual area of 0x2e000 bytes 00:04:21.041 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:21.041 EAL: Setting up physically contiguous memory... 00:04:21.041 EAL: Setting maximum number of open files to 524288 00:04:21.041 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:21.041 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:21.041 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:21.041 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.041 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:21.041 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.041 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.041 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:21.041 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:21.041 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.041 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:21.042 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.042 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.042 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:21.042 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:21.042 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.042 EAL: 
Virtual area found at 0x200800400000 (size = 0x61000) 00:04:21.042 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.042 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.042 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:21.042 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:21.042 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.042 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:21.042 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.042 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.042 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:21.042 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:21.042 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:21.042 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.042 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:21.042 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.042 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.042 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:21.042 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:21.042 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.042 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:21.042 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.042 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.042 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:21.042 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:21.042 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.042 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:21.042 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.042 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.042 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 
00:04:21.042 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:21.042 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.042 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:21.042 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.042 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.042 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:21.042 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:21.042 EAL: Hugepages will be freed exactly as allocated. 00:04:21.042 EAL: No shared files mode enabled, IPC is disabled 00:04:21.042 EAL: No shared files mode enabled, IPC is disabled 00:04:21.042 EAL: TSC frequency is ~2300000 KHz 00:04:21.042 EAL: Main lcore 0 is ready (tid=7f4db69aaa00;cpuset=[0]) 00:04:21.042 EAL: Trying to obtain current memory policy. 00:04:21.042 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.042 EAL: Restoring previous memory policy: 0 00:04:21.042 EAL: request: mp_malloc_sync 00:04:21.042 EAL: No shared files mode enabled, IPC is disabled 00:04:21.042 EAL: Heap on socket 0 was expanded by 2MB 00:04:21.042 EAL: No shared files mode enabled, IPC is disabled 00:04:21.042 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:21.042 EAL: Mem event callback 'spdk:(nil)' registered 00:04:21.042 00:04:21.042 00:04:21.042 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.042 http://cunit.sourceforge.net/ 00:04:21.042 00:04:21.042 00:04:21.042 Suite: components_suite 00:04:21.042 Test: vtophys_malloc_test ...passed 00:04:21.042 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:21.042 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.042 EAL: Restoring previous memory policy: 4
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was expanded by 4MB
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was shrunk by 4MB
00:04:21.042 EAL: Trying to obtain current memory policy.
00:04:21.042 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.042 EAL: Restoring previous memory policy: 4
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was expanded by 6MB
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was shrunk by 6MB
00:04:21.042 EAL: Trying to obtain current memory policy.
00:04:21.042 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.042 EAL: Restoring previous memory policy: 4
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was expanded by 10MB
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was shrunk by 10MB
00:04:21.042 EAL: Trying to obtain current memory policy.
00:04:21.042 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.042 EAL: Restoring previous memory policy: 4
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was expanded by 18MB
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was shrunk by 18MB
00:04:21.042 EAL: Trying to obtain current memory policy.
00:04:21.042 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.042 EAL: Restoring previous memory policy: 4
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was expanded by 34MB
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was shrunk by 34MB
00:04:21.042 EAL: Trying to obtain current memory policy.
00:04:21.042 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.042 EAL: Restoring previous memory policy: 4
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was expanded by 66MB
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was shrunk by 66MB
00:04:21.042 EAL: Trying to obtain current memory policy.
00:04:21.042 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.042 EAL: Restoring previous memory policy: 4
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was expanded by 130MB
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was shrunk by 130MB
00:04:21.042 EAL: Trying to obtain current memory policy.
00:04:21.042 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.042 EAL: Restoring previous memory policy: 4
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.042 EAL: request: mp_malloc_sync
00:04:21.042 EAL: No shared files mode enabled, IPC is disabled
00:04:21.042 EAL: Heap on socket 0 was expanded by 258MB
00:04:21.042 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.301 EAL: request: mp_malloc_sync
00:04:21.301 EAL: No shared files mode enabled, IPC is disabled
00:04:21.301 EAL: Heap on socket 0 was shrunk by 258MB
00:04:21.301 EAL: Trying to obtain current memory policy.
00:04:21.301 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.301 EAL: Restoring previous memory policy: 4
00:04:21.301 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.301 EAL: request: mp_malloc_sync
00:04:21.301 EAL: No shared files mode enabled, IPC is disabled
00:04:21.301 EAL: Heap on socket 0 was expanded by 514MB
00:04:21.301 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.560 EAL: request: mp_malloc_sync
00:04:21.560 EAL: No shared files mode enabled, IPC is disabled
00:04:21.560 EAL: Heap on socket 0 was shrunk by 514MB
00:04:21.560 EAL: Trying to obtain current memory policy.
00:04:21.560 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:21.560 EAL: Restoring previous memory policy: 4
00:04:21.560 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.560 EAL: request: mp_malloc_sync
00:04:21.560 EAL: No shared files mode enabled, IPC is disabled
00:04:21.560 EAL: Heap on socket 0 was expanded by 1026MB
00:04:21.819 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.819 EAL: request: mp_malloc_sync
00:04:21.819 EAL: No shared files mode enabled, IPC is disabled
00:04:21.819 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:21.819 passed
00:04:21.819
00:04:21.819 Run Summary: Type Total Ran Passed Failed Inactive
00:04:21.819 suites 1 1 n/a 0 0
00:04:21.819 tests 2 2 2 0 0
00:04:21.819 asserts 497 497 497 0 n/a
00:04:21.819
00:04:21.819 Elapsed time = 0.962 seconds
00:04:21.819 EAL: Calling mem event callback 'spdk:(nil)'
00:04:21.819 EAL: request: mp_malloc_sync
00:04:21.819 EAL: No shared files mode enabled, IPC is disabled
00:04:21.819 EAL: Heap on socket 0 was shrunk by 2MB
00:04:21.819 EAL: No shared files mode enabled, IPC is disabled
00:04:21.819 EAL: No shared files mode enabled, IPC is disabled
00:04:21.819 EAL: No shared files mode enabled, IPC is disabled
00:04:22.078
00:04:22.078 real 0m1.067s
00:04:22.078 user 0m0.637s
00:04:22.078 sys 0m0.407s
00:04:22.078 17:12:40 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:22.078 17:12:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:22.078 ************************************
00:04:22.078 END TEST env_vtophys
00:04:22.078 ************************************
00:04:22.078 17:12:40 env -- common/autotest_common.sh@1142 -- # return 0
00:04:22.079 17:12:40 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:22.079 17:12:40 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:22.079 17:12:40 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:22.079 17:12:40 env -- common/autotest_common.sh@10 -- # set +x
00:04:22.079 ************************************
00:04:22.079 START TEST env_pci
00:04:22.079 ************************************
00:04:22.079 17:12:40 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:22.079
00:04:22.079
00:04:22.079 CUnit - A unit testing framework for C - Version 2.1-3
00:04:22.079 http://cunit.sourceforge.net/
00:04:22.079
00:04:22.079
00:04:22.079 Suite: pci
00:04:22.079 Test: pci_hook ...[2024-07-12 17:12:40.684247] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3882777 has claimed it
00:04:22.079 EAL: Cannot find device (10000:00:01.0)
00:04:22.079 EAL: Failed to attach device on primary process
00:04:22.079 passed
00:04:22.079
00:04:22.079 Run Summary: Type Total Ran Passed Failed Inactive
00:04:22.079 suites 1 1 n/a 0 0
00:04:22.079 tests 1 1 1 0 0
00:04:22.079 asserts 25 25 25 0 n/a
00:04:22.079
00:04:22.079 Elapsed time = 0.025 seconds
00:04:22.079
00:04:22.079 real 0m0.043s
00:04:22.079 user 0m0.015s
00:04:22.079 sys 0m0.028s
00:04:22.079 17:12:40 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:22.079 17:12:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:22.079 ************************************
00:04:22.079 END TEST env_pci
00:04:22.079 ************************************
00:04:22.079 17:12:40 env -- common/autotest_common.sh@1142 -- # return 0
00:04:22.079 17:12:40 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:22.079 17:12:40 env -- env/env.sh@15 -- # uname
00:04:22.079 17:12:40 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:22.079 17:12:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:22.079 17:12:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:22.079 17:12:40 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:04:22.079 17:12:40 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:22.079 17:12:40 env -- common/autotest_common.sh@10 -- # set +x
00:04:22.079 ************************************
00:04:22.079 START TEST env_dpdk_post_init
00:04:22.079 ************************************
00:04:22.079 17:12:40 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:22.079 EAL: Detected CPU lcores: 96
00:04:22.079 EAL: Detected NUMA nodes: 2
00:04:22.079 EAL: Detected shared linkage of DPDK
00:04:22.079 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:22.079 EAL: Selected IOVA mode 'VA'
00:04:22.079 EAL: No free 2048 kB hugepages reported on node 1
00:04:22.079 EAL: VFIO support initialized
00:04:22.079 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:22.338 EAL: Using IOMMU type 1 (Type 1)
00:04:22.338 EAL: Ignore mapping IO port bar(1)
00:04:22.338 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:22.338 EAL: Ignore mapping IO port bar(1)
00:04:22.338 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:22.338 EAL: Ignore mapping IO port bar(1)
00:04:22.338 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:22.338 EAL: Ignore mapping IO port bar(1)
00:04:22.338 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:22.338 EAL: Ignore mapping IO port bar(1)
00:04:22.338 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:22.338 EAL: Ignore mapping IO port bar(1)
00:04:22.338 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:22.338 EAL: Ignore mapping IO port bar(1)
00:04:22.338 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:22.338 EAL: Ignore mapping IO port bar(1)
00:04:22.338 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:23.275 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:23.275 EAL: Ignore mapping IO port bar(1)
00:04:23.275 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:23.275 EAL: Ignore mapping IO port bar(1)
00:04:23.275 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:23.275 EAL: Ignore mapping IO port bar(1)
00:04:23.275 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:23.275 EAL: Ignore mapping IO port bar(1)
00:04:23.275 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:23.275 EAL: Ignore mapping IO port bar(1)
00:04:23.275 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:23.275 EAL: Ignore mapping IO port bar(1)
00:04:23.275 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:23.275 EAL: Ignore mapping IO port bar(1)
00:04:23.275 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:23.275 EAL: Ignore mapping IO port bar(1)
00:04:23.275 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:26.637 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:26.637 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:26.637 Starting DPDK initialization...
00:04:26.637 Starting SPDK post initialization...
00:04:26.637 SPDK NVMe probe
00:04:26.637 Attaching to 0000:5e:00.0
00:04:26.637 Attached to 0000:5e:00.0
00:04:26.637 Cleaning up...
00:04:26.637
00:04:26.637 real 0m4.371s
00:04:26.637 user 0m3.328s
00:04:26.637 sys 0m0.117s
00:04:26.637 17:12:45 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:26.637 17:12:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:26.637 ************************************
00:04:26.637 END TEST env_dpdk_post_init
00:04:26.637 ************************************
00:04:26.637 17:12:45 env -- common/autotest_common.sh@1142 -- # return 0
00:04:26.637 17:12:45 env -- env/env.sh@26 -- # uname
00:04:26.637 17:12:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:26.637 17:12:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:26.637 17:12:45 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:26.637 17:12:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:26.637 17:12:45 env -- common/autotest_common.sh@10 -- # set +x
00:04:26.637 ************************************
00:04:26.637 START TEST env_mem_callbacks
00:04:26.637 ************************************
00:04:26.637 17:12:45 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:26.637 EAL: Detected CPU lcores: 96
00:04:26.637 EAL: Detected NUMA nodes: 2
00:04:26.637 EAL: Detected shared linkage of DPDK
00:04:26.637 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:26.637 EAL: Selected IOVA mode 'VA'
00:04:26.637 EAL: No free 2048 kB hugepages reported on node 1
00:04:26.637 EAL: VFIO support initialized
00:04:26.637 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:26.637
00:04:26.637
00:04:26.637 CUnit - A unit testing framework for C - Version 2.1-3
00:04:26.637 http://cunit.sourceforge.net/
00:04:26.637
00:04:26.637
00:04:26.637 Suite: memory
00:04:26.637 Test: test ...
00:04:26.637 register 0x200000200000 2097152
00:04:26.637 malloc 3145728
00:04:26.637 register 0x200000400000 4194304
00:04:26.637 buf 0x200000500000 len 3145728 PASSED
00:04:26.637 malloc 64
00:04:26.637 buf 0x2000004fff40 len 64 PASSED
00:04:26.637 malloc 4194304
00:04:26.637 register 0x200000800000 6291456
00:04:26.637 buf 0x200000a00000 len 4194304 PASSED
00:04:26.637 free 0x200000500000 3145728
00:04:26.637 free 0x2000004fff40 64
00:04:26.637 unregister 0x200000400000 4194304 PASSED
00:04:26.637 free 0x200000a00000 4194304
00:04:26.637 unregister 0x200000800000 6291456 PASSED
00:04:26.637 malloc 8388608
00:04:26.637 register 0x200000400000 10485760
00:04:26.637 buf 0x200000600000 len 8388608 PASSED
00:04:26.637 free 0x200000600000 8388608
00:04:26.637 unregister 0x200000400000 10485760 PASSED
00:04:26.637 passed
00:04:26.637
00:04:26.637 Run Summary: Type Total Ran Passed Failed Inactive
00:04:26.637 suites 1 1 n/a 0 0
00:04:26.637 tests 1 1 1 0 0
00:04:26.637 asserts 15 15 15 0 n/a
00:04:26.637
00:04:26.637 Elapsed time = 0.005 seconds
00:04:26.637
00:04:26.637 real 0m0.057s
00:04:26.637 user 0m0.022s
00:04:26.637 sys 0m0.034s
00:04:26.637 17:12:45 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:26.637 17:12:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:26.637 ************************************
00:04:26.637 END TEST env_mem_callbacks
00:04:26.637 ************************************
00:04:26.637 17:12:45 env -- common/autotest_common.sh@1142 -- # return 0
00:04:26.637
00:04:26.637 real 0m6.129s
00:04:26.637 user 0m4.324s
00:04:26.637 sys 0m0.885s
00:04:26.637 17:12:45 env -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:26.637 17:12:45 env -- common/autotest_common.sh@10 -- # set +x
00:04:26.637 ************************************
00:04:26.637 END TEST env
00:04:26.637 ************************************
00:04:26.637 17:12:45 -- common/autotest_common.sh@1142 -- # return 0
00:04:26.637 17:12:45 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:26.637 17:12:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:26.637 17:12:45 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:26.637 17:12:45 -- common/autotest_common.sh@10 -- # set +x
00:04:26.637 ************************************
00:04:26.637 START TEST rpc
00:04:26.637 ************************************
00:04:26.637 17:12:45 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:26.896 * Looking for test storage...
00:04:26.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:26.896 17:12:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3883599
00:04:26.896 17:12:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:26.896 17:12:45 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:26.896 17:12:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3883599
00:04:26.896 17:12:45 rpc -- common/autotest_common.sh@829 -- # '[' -z 3883599 ']'
00:04:26.896 17:12:45 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:26.896 17:12:45 rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:26.896 17:12:45 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:26.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:26.896 17:12:45 rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:26.896 17:12:45 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:26.896 [2024-07-12 17:12:45.517786] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:04:26.896 [2024-07-12 17:12:45.517837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3883599 ]
00:04:26.896 EAL: No free 2048 kB hugepages reported on node 1
00:04:26.896 [2024-07-12 17:12:45.571479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:26.896 [2024-07-12 17:12:45.644963] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:26.896 [2024-07-12 17:12:45.645003] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3883599' to capture a snapshot of events at runtime.
00:04:26.896 [2024-07-12 17:12:45.645010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:26.896 [2024-07-12 17:12:45.645015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:26.896 [2024-07-12 17:12:45.645020] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3883599 for offline analysis/debug.
00:04:26.896 [2024-07-12 17:12:45.645040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:27.832 17:12:46 rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:27.832 17:12:46 rpc -- common/autotest_common.sh@862 -- # return 0
00:04:27.832 17:12:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:27.832 17:12:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:27.832 17:12:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:27.832 17:12:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:27.832 17:12:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:27.832 17:12:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:27.832 17:12:46 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:27.832 ************************************
00:04:27.832 START TEST rpc_integrity
00:04:27.832 ************************************
00:04:27.832 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity
00:04:27.832 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:27.832 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:27.832 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.832 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:27.832 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:27.832 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:27.832 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:27.832 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:27.832 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:27.832 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.832 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:27.832 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:27.832 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:27.832 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:27.832 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.832 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:27.832 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:27.832 {
00:04:27.832 "name": "Malloc0",
00:04:27.832 "aliases": [
00:04:27.832 "93de12b0-f7db-4f95-8c57-b3511cba54c7"
00:04:27.832 ],
00:04:27.832 "product_name": "Malloc disk",
00:04:27.832 "block_size": 512,
00:04:27.832 "num_blocks": 16384,
00:04:27.832 "uuid": "93de12b0-f7db-4f95-8c57-b3511cba54c7",
00:04:27.832 "assigned_rate_limits": {
00:04:27.832 "rw_ios_per_sec": 0,
00:04:27.832 "rw_mbytes_per_sec": 0,
00:04:27.832 "r_mbytes_per_sec": 0,
00:04:27.832 "w_mbytes_per_sec": 0
00:04:27.832 },
00:04:27.832 "claimed": false,
00:04:27.832 "zoned": false,
00:04:27.832 "supported_io_types": {
00:04:27.832 "read": true,
00:04:27.832 "write": true,
00:04:27.832 "unmap": true,
00:04:27.832 "flush": true,
00:04:27.832 "reset": true,
00:04:27.832 "nvme_admin": false,
00:04:27.832 "nvme_io": false,
00:04:27.832 "nvme_io_md": false,
00:04:27.832 "write_zeroes": true,
00:04:27.832 "zcopy": true,
00:04:27.832 "get_zone_info": false,
00:04:27.832 "zone_management": false,
00:04:27.832 "zone_append": false,
00:04:27.832 "compare": false,
00:04:27.832 "compare_and_write": false,
00:04:27.832 "abort": true,
00:04:27.832 "seek_hole": false,
00:04:27.832 "seek_data": false,
00:04:27.832 "copy": true,
00:04:27.832 "nvme_iov_md": false
00:04:27.832 },
00:04:27.832 "memory_domains": [
00:04:27.832 {
00:04:27.832 "dma_device_id": "system",
00:04:27.832 "dma_device_type": 1
00:04:27.832 },
00:04:27.832 {
00:04:27.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:27.832 "dma_device_type": 2
00:04:27.832 }
00:04:27.832 ],
00:04:27.832 "driver_specific": {}
00:04:27.832 }
00:04:27.832 ]'
00:04:27.832 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:27.832 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:27.832 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:27.832 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:27.832 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.833 [2024-07-12 17:12:46.480646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:27.833 [2024-07-12 17:12:46.480676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:27.833 [2024-07-12 17:12:46.480688] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eca2d0
00:04:27.833 [2024-07-12 17:12:46.480695] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:27.833 [2024-07-12 17:12:46.481731] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:27.833 [2024-07-12 17:12:46.481751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:27.833 Passthru0
00:04:27.833 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:27.833 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:27.833 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:27.833 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.833 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:27.833 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:27.833 {
00:04:27.833 "name": "Malloc0",
00:04:27.833 "aliases": [
00:04:27.833 "93de12b0-f7db-4f95-8c57-b3511cba54c7"
00:04:27.833 ],
00:04:27.833 "product_name": "Malloc disk",
00:04:27.833 "block_size": 512,
00:04:27.833 "num_blocks": 16384,
00:04:27.833 "uuid": "93de12b0-f7db-4f95-8c57-b3511cba54c7",
00:04:27.833 "assigned_rate_limits": {
00:04:27.833 "rw_ios_per_sec": 0,
00:04:27.833 "rw_mbytes_per_sec": 0,
00:04:27.833 "r_mbytes_per_sec": 0,
00:04:27.833 "w_mbytes_per_sec": 0
00:04:27.833 },
00:04:27.833 "claimed": true,
00:04:27.833 "claim_type": "exclusive_write",
00:04:27.833 "zoned": false,
00:04:27.833 "supported_io_types": {
00:04:27.833 "read": true,
00:04:27.833 "write": true,
00:04:27.833 "unmap": true,
00:04:27.833 "flush": true,
00:04:27.833 "reset": true,
00:04:27.833 "nvme_admin": false,
00:04:27.833 "nvme_io": false,
00:04:27.833 "nvme_io_md": false,
00:04:27.833 "write_zeroes": true,
00:04:27.833 "zcopy": true,
00:04:27.833 "get_zone_info": false,
00:04:27.833 "zone_management": false,
00:04:27.833 "zone_append": false,
00:04:27.833 "compare": false,
00:04:27.833 "compare_and_write": false,
00:04:27.833 "abort": true,
00:04:27.833 "seek_hole": false,
00:04:27.833 "seek_data": false,
00:04:27.833 "copy": true,
00:04:27.833 "nvme_iov_md": false
00:04:27.833 },
00:04:27.833 "memory_domains": [
00:04:27.833 {
00:04:27.833 "dma_device_id": "system",
00:04:27.833 "dma_device_type": 1
00:04:27.833 },
00:04:27.833 {
00:04:27.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:27.833 "dma_device_type": 2
00:04:27.833 }
00:04:27.833 ],
00:04:27.833 "driver_specific": {}
00:04:27.833 },
00:04:27.833 {
00:04:27.833 "name": "Passthru0",
00:04:27.833 "aliases": [
00:04:27.833 "a57d2c96-5e45-5ee2-a584-b4469517acd4"
00:04:27.833 ],
00:04:27.833 "product_name": "passthru",
00:04:27.833 "block_size": 512,
00:04:27.833 "num_blocks": 16384,
00:04:27.833 "uuid": "a57d2c96-5e45-5ee2-a584-b4469517acd4",
00:04:27.833 "assigned_rate_limits": {
00:04:27.833 "rw_ios_per_sec": 0,
00:04:27.833 "rw_mbytes_per_sec": 0,
00:04:27.833 "r_mbytes_per_sec": 0,
00:04:27.833 "w_mbytes_per_sec": 0
00:04:27.833 },
00:04:27.833 "claimed": false,
00:04:27.833 "zoned": false,
00:04:27.833 "supported_io_types": {
00:04:27.833 "read": true,
00:04:27.833 "write": true,
00:04:27.833 "unmap": true,
00:04:27.833 "flush": true,
00:04:27.833 "reset": true,
00:04:27.833 "nvme_admin": false,
00:04:27.833 "nvme_io": false,
00:04:27.833 "nvme_io_md": false,
00:04:27.833 "write_zeroes": true,
00:04:27.833 "zcopy": true,
00:04:27.833 "get_zone_info": false,
00:04:27.833 "zone_management": false,
00:04:27.833 "zone_append": false,
00:04:27.833 "compare": false,
00:04:27.833 "compare_and_write": false,
00:04:27.833 "abort": true,
00:04:27.833 "seek_hole": false,
00:04:27.833 "seek_data": false,
00:04:27.833 "copy": true,
00:04:27.833 "nvme_iov_md": false
00:04:27.833 },
00:04:27.833 "memory_domains": [
00:04:27.833 {
00:04:27.833 "dma_device_id": "system",
00:04:27.833 "dma_device_type": 1
00:04:27.833 },
00:04:27.833 {
00:04:27.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:27.833 "dma_device_type": 2
00:04:27.833 }
00:04:27.833 ],
00:04:27.833 "driver_specific": {
00:04:27.833 "passthru": {
00:04:27.833 "name": "Passthru0",
00:04:27.833 "base_bdev_name": "Malloc0"
00:04:27.833 }
00:04:27.833 }
00:04:27.833 }
00:04:27.833 ]'
00:04:27.833 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:27.833 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:27.833 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:27.833 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:27.833 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.833 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:27.833 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:27.833 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:27.833 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.833 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:27.833 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:27.833 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:27.833 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:27.833 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:27.833 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:27.833 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:28.091 17:12:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:28.091
00:04:28.091 real 0m0.282s
00:04:28.091 user 0m0.182s
00:04:28.091 sys 0m0.030s
00:04:28.091 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:28.091 17:12:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:28.091 ************************************
00:04:28.091 END TEST rpc_integrity
00:04:28.091 ************************************
00:04:28.091 17:12:46 rpc -- common/autotest_common.sh@1142 -- # return 0
00:04:28.091 17:12:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:28.091 17:12:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:28.091 17:12:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:28.091 17:12:46 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:28.091 ************************************
00:04:28.091 START TEST rpc_plugins
00:04:28.091 ************************************
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins
00:04:28.091 17:12:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:28.091 17:12:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:28.091 17:12:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:28.091 17:12:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:28.091 {
00:04:28.091 "name": "Malloc1",
00:04:28.091 "aliases": [
00:04:28.091 "481fc7fe-7710-44ff-96d9-6153bc30209e"
00:04:28.091 ],
00:04:28.091 "product_name": "Malloc disk",
00:04:28.091 "block_size": 4096,
00:04:28.091 "num_blocks": 256,
00:04:28.091 "uuid": "481fc7fe-7710-44ff-96d9-6153bc30209e",
00:04:28.091 "assigned_rate_limits": {
00:04:28.091 "rw_ios_per_sec": 0,
00:04:28.091 "rw_mbytes_per_sec": 0,
00:04:28.091 "r_mbytes_per_sec": 0,
00:04:28.091 "w_mbytes_per_sec": 0
00:04:28.091 },
00:04:28.091 "claimed": false,
00:04:28.091 "zoned": false,
00:04:28.091 "supported_io_types": {
00:04:28.091 "read": true,
00:04:28.091 "write": true,
00:04:28.091 "unmap": true,
00:04:28.091 "flush": true,
00:04:28.091 "reset": true,
00:04:28.091 "nvme_admin": false,
00:04:28.091 "nvme_io": false,
00:04:28.091 "nvme_io_md": false,
00:04:28.091 "write_zeroes": true,
00:04:28.091 "zcopy": true,
00:04:28.091 "get_zone_info": false,
00:04:28.091 "zone_management": false,
00:04:28.091 "zone_append": false,
00:04:28.091 "compare": false,
00:04:28.091 "compare_and_write": false,
00:04:28.091 "abort": true,
00:04:28.091 "seek_hole": false,
00:04:28.091 "seek_data": false,
00:04:28.091 "copy": true,
00:04:28.091 "nvme_iov_md": false
00:04:28.091 },
00:04:28.091 "memory_domains": [
00:04:28.091 {
00:04:28.091 "dma_device_id": "system",
00:04:28.091 "dma_device_type": 1
00:04:28.091 },
00:04:28.091 {
00:04:28.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:28.091 "dma_device_type": 2
00:04:28.091 }
00:04:28.091 ],
00:04:28.091 "driver_specific": {}
00:04:28.091 }
00:04:28.091 ]'
00:04:28.091 17:12:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:28.091 17:12:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:28.091 17:12:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:28.091 17:12:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:28.091 17:12:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:28.091 17:12:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:28.091 17:12:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:28.091
00:04:28.091 real 0m0.136s
00:04:28.091 user 0m0.084s
00:04:28.091 sys 0m0.016s
00:04:28.091 17:12:46 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:28.091 17:12:46 rpc.rpc_plugins --
common/autotest_common.sh@10 -- # set +x 00:04:28.091 ************************************ 00:04:28.091 END TEST rpc_plugins 00:04:28.091 ************************************ 00:04:28.091 17:12:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:28.091 17:12:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:28.091 17:12:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.091 17:12:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.091 17:12:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.350 ************************************ 00:04:28.350 START TEST rpc_trace_cmd_test 00:04:28.350 ************************************ 00:04:28.350 17:12:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:28.350 17:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:28.350 17:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:28.350 17:12:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.350 17:12:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:28.350 17:12:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.350 17:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:28.350 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3883599", 00:04:28.350 "tpoint_group_mask": "0x8", 00:04:28.350 "iscsi_conn": { 00:04:28.350 "mask": "0x2", 00:04:28.350 "tpoint_mask": "0x0" 00:04:28.350 }, 00:04:28.350 "scsi": { 00:04:28.350 "mask": "0x4", 00:04:28.350 "tpoint_mask": "0x0" 00:04:28.350 }, 00:04:28.350 "bdev": { 00:04:28.350 "mask": "0x8", 00:04:28.350 "tpoint_mask": "0xffffffffffffffff" 00:04:28.350 }, 00:04:28.350 "nvmf_rdma": { 00:04:28.350 "mask": "0x10", 00:04:28.350 "tpoint_mask": "0x0" 00:04:28.350 }, 00:04:28.350 "nvmf_tcp": { 00:04:28.350 "mask": "0x20", 00:04:28.350 "tpoint_mask": "0x0" 00:04:28.350 }, 
00:04:28.350 "ftl": { 00:04:28.350 "mask": "0x40", 00:04:28.350 "tpoint_mask": "0x0" 00:04:28.350 }, 00:04:28.350 "blobfs": { 00:04:28.350 "mask": "0x80", 00:04:28.350 "tpoint_mask": "0x0" 00:04:28.350 }, 00:04:28.350 "dsa": { 00:04:28.350 "mask": "0x200", 00:04:28.350 "tpoint_mask": "0x0" 00:04:28.350 }, 00:04:28.350 "thread": { 00:04:28.350 "mask": "0x400", 00:04:28.350 "tpoint_mask": "0x0" 00:04:28.350 }, 00:04:28.350 "nvme_pcie": { 00:04:28.350 "mask": "0x800", 00:04:28.350 "tpoint_mask": "0x0" 00:04:28.350 }, 00:04:28.350 "iaa": { 00:04:28.350 "mask": "0x1000", 00:04:28.350 "tpoint_mask": "0x0" 00:04:28.350 }, 00:04:28.350 "nvme_tcp": { 00:04:28.350 "mask": "0x2000", 00:04:28.350 "tpoint_mask": "0x0" 00:04:28.350 }, 00:04:28.350 "bdev_nvme": { 00:04:28.350 "mask": "0x4000", 00:04:28.350 "tpoint_mask": "0x0" 00:04:28.350 }, 00:04:28.350 "sock": { 00:04:28.350 "mask": "0x8000", 00:04:28.350 "tpoint_mask": "0x0" 00:04:28.350 } 00:04:28.350 }' 00:04:28.350 17:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:28.350 17:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:28.350 17:12:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:28.351 17:12:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:28.351 17:12:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:28.351 17:12:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:28.351 17:12:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:28.351 17:12:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:28.351 17:12:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:28.609 17:12:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:28.609 00:04:28.609 real 0m0.231s 00:04:28.609 user 0m0.199s 00:04:28.609 sys 0m0.023s 00:04:28.609 17:12:47 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.609 17:12:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:28.609 ************************************ 00:04:28.609 END TEST rpc_trace_cmd_test 00:04:28.609 ************************************ 00:04:28.609 17:12:47 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:28.609 17:12:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:28.609 17:12:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:28.609 17:12:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:28.609 17:12:47 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.609 17:12:47 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.609 17:12:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.609 ************************************ 00:04:28.609 START TEST rpc_daemon_integrity 00:04:28.609 ************************************ 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.609 17:12:47 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.609 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.609 { 00:04:28.609 "name": "Malloc2", 00:04:28.609 "aliases": [ 00:04:28.609 "200074e5-b3ec-416a-b073-06e223848926" 00:04:28.609 ], 00:04:28.609 "product_name": "Malloc disk", 00:04:28.609 "block_size": 512, 00:04:28.609 "num_blocks": 16384, 00:04:28.609 "uuid": "200074e5-b3ec-416a-b073-06e223848926", 00:04:28.609 "assigned_rate_limits": { 00:04:28.609 "rw_ios_per_sec": 0, 00:04:28.609 "rw_mbytes_per_sec": 0, 00:04:28.609 "r_mbytes_per_sec": 0, 00:04:28.609 "w_mbytes_per_sec": 0 00:04:28.609 }, 00:04:28.609 "claimed": false, 00:04:28.609 "zoned": false, 00:04:28.609 "supported_io_types": { 00:04:28.609 "read": true, 00:04:28.609 "write": true, 00:04:28.609 "unmap": true, 00:04:28.609 "flush": true, 00:04:28.609 "reset": true, 00:04:28.609 "nvme_admin": false, 00:04:28.609 "nvme_io": false, 00:04:28.609 "nvme_io_md": false, 00:04:28.609 "write_zeroes": true, 00:04:28.609 "zcopy": true, 00:04:28.609 "get_zone_info": false, 00:04:28.609 "zone_management": false, 00:04:28.609 "zone_append": false, 00:04:28.609 "compare": false, 00:04:28.609 "compare_and_write": false, 00:04:28.609 "abort": true, 00:04:28.609 "seek_hole": false, 00:04:28.609 "seek_data": false, 00:04:28.609 "copy": true, 00:04:28.609 "nvme_iov_md": false 00:04:28.609 }, 00:04:28.609 "memory_domains": [ 00:04:28.609 { 00:04:28.610 "dma_device_id": "system", 00:04:28.610 "dma_device_type": 
1 00:04:28.610 }, 00:04:28.610 { 00:04:28.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.610 "dma_device_type": 2 00:04:28.610 } 00:04:28.610 ], 00:04:28.610 "driver_specific": {} 00:04:28.610 } 00:04:28.610 ]' 00:04:28.610 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:28.610 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.610 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:28.610 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.610 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.610 [2024-07-12 17:12:47.326962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:28.610 [2024-07-12 17:12:47.326991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.610 [2024-07-12 17:12:47.327003] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2061ac0 00:04:28.610 [2024-07-12 17:12:47.327009] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.610 [2024-07-12 17:12:47.327963] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.610 [2024-07-12 17:12:47.327984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.610 Passthru0 00:04:28.610 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.610 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.610 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.610 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.610 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.610 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:04:28.610 { 00:04:28.610 "name": "Malloc2", 00:04:28.610 "aliases": [ 00:04:28.610 "200074e5-b3ec-416a-b073-06e223848926" 00:04:28.610 ], 00:04:28.610 "product_name": "Malloc disk", 00:04:28.610 "block_size": 512, 00:04:28.610 "num_blocks": 16384, 00:04:28.610 "uuid": "200074e5-b3ec-416a-b073-06e223848926", 00:04:28.610 "assigned_rate_limits": { 00:04:28.610 "rw_ios_per_sec": 0, 00:04:28.610 "rw_mbytes_per_sec": 0, 00:04:28.610 "r_mbytes_per_sec": 0, 00:04:28.610 "w_mbytes_per_sec": 0 00:04:28.610 }, 00:04:28.610 "claimed": true, 00:04:28.610 "claim_type": "exclusive_write", 00:04:28.610 "zoned": false, 00:04:28.610 "supported_io_types": { 00:04:28.610 "read": true, 00:04:28.610 "write": true, 00:04:28.610 "unmap": true, 00:04:28.610 "flush": true, 00:04:28.610 "reset": true, 00:04:28.610 "nvme_admin": false, 00:04:28.610 "nvme_io": false, 00:04:28.610 "nvme_io_md": false, 00:04:28.610 "write_zeroes": true, 00:04:28.610 "zcopy": true, 00:04:28.610 "get_zone_info": false, 00:04:28.610 "zone_management": false, 00:04:28.610 "zone_append": false, 00:04:28.610 "compare": false, 00:04:28.610 "compare_and_write": false, 00:04:28.610 "abort": true, 00:04:28.610 "seek_hole": false, 00:04:28.610 "seek_data": false, 00:04:28.610 "copy": true, 00:04:28.610 "nvme_iov_md": false 00:04:28.610 }, 00:04:28.610 "memory_domains": [ 00:04:28.610 { 00:04:28.610 "dma_device_id": "system", 00:04:28.610 "dma_device_type": 1 00:04:28.610 }, 00:04:28.610 { 00:04:28.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.610 "dma_device_type": 2 00:04:28.610 } 00:04:28.610 ], 00:04:28.610 "driver_specific": {} 00:04:28.610 }, 00:04:28.610 { 00:04:28.610 "name": "Passthru0", 00:04:28.610 "aliases": [ 00:04:28.610 "10c8f39c-4455-5bd1-9f41-2fb117b50487" 00:04:28.610 ], 00:04:28.610 "product_name": "passthru", 00:04:28.610 "block_size": 512, 00:04:28.610 "num_blocks": 16384, 00:04:28.610 "uuid": "10c8f39c-4455-5bd1-9f41-2fb117b50487", 00:04:28.610 "assigned_rate_limits": { 00:04:28.610 
"rw_ios_per_sec": 0, 00:04:28.610 "rw_mbytes_per_sec": 0, 00:04:28.610 "r_mbytes_per_sec": 0, 00:04:28.610 "w_mbytes_per_sec": 0 00:04:28.610 }, 00:04:28.610 "claimed": false, 00:04:28.610 "zoned": false, 00:04:28.610 "supported_io_types": { 00:04:28.610 "read": true, 00:04:28.610 "write": true, 00:04:28.610 "unmap": true, 00:04:28.610 "flush": true, 00:04:28.610 "reset": true, 00:04:28.610 "nvme_admin": false, 00:04:28.610 "nvme_io": false, 00:04:28.610 "nvme_io_md": false, 00:04:28.610 "write_zeroes": true, 00:04:28.610 "zcopy": true, 00:04:28.610 "get_zone_info": false, 00:04:28.610 "zone_management": false, 00:04:28.610 "zone_append": false, 00:04:28.610 "compare": false, 00:04:28.610 "compare_and_write": false, 00:04:28.610 "abort": true, 00:04:28.610 "seek_hole": false, 00:04:28.610 "seek_data": false, 00:04:28.610 "copy": true, 00:04:28.610 "nvme_iov_md": false 00:04:28.610 }, 00:04:28.610 "memory_domains": [ 00:04:28.610 { 00:04:28.610 "dma_device_id": "system", 00:04:28.610 "dma_device_type": 1 00:04:28.610 }, 00:04:28.610 { 00:04:28.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.610 "dma_device_type": 2 00:04:28.610 } 00:04:28.610 ], 00:04:28.610 "driver_specific": { 00:04:28.610 "passthru": { 00:04:28.610 "name": "Passthru0", 00:04:28.610 "base_bdev_name": "Malloc2" 00:04:28.610 } 00:04:28.610 } 00:04:28.610 } 00:04:28.610 ]' 00:04:28.610 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.868 00:04:28.868 real 0m0.272s 00:04:28.868 user 0m0.186s 00:04:28.868 sys 0m0.025s 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.868 17:12:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.868 ************************************ 00:04:28.869 END TEST rpc_daemon_integrity 00:04:28.869 ************************************ 00:04:28.869 17:12:47 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:28.869 17:12:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:28.869 17:12:47 rpc -- rpc/rpc.sh@84 -- # killprocess 3883599 00:04:28.869 17:12:47 rpc -- common/autotest_common.sh@948 -- # '[' -z 3883599 ']' 00:04:28.869 17:12:47 rpc -- common/autotest_common.sh@952 -- # kill -0 3883599 00:04:28.869 17:12:47 rpc -- common/autotest_common.sh@953 -- # uname 00:04:28.869 17:12:47 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:28.869 17:12:47 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 3883599 00:04:28.869 17:12:47 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:28.869 17:12:47 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:28.869 17:12:47 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3883599' 00:04:28.869 killing process with pid 3883599 00:04:28.869 17:12:47 rpc -- common/autotest_common.sh@967 -- # kill 3883599 00:04:28.869 17:12:47 rpc -- common/autotest_common.sh@972 -- # wait 3883599 00:04:29.127 00:04:29.127 real 0m2.473s 00:04:29.127 user 0m3.214s 00:04:29.127 sys 0m0.654s 00:04:29.127 17:12:47 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.127 17:12:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.127 ************************************ 00:04:29.127 END TEST rpc 00:04:29.127 ************************************ 00:04:29.127 17:12:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:29.127 17:12:47 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:29.127 17:12:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.127 17:12:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.127 17:12:47 -- common/autotest_common.sh@10 -- # set +x 00:04:29.385 ************************************ 00:04:29.385 START TEST skip_rpc 00:04:29.385 ************************************ 00:04:29.385 17:12:47 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:29.385 * Looking for test storage... 
00:04:29.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:29.385 17:12:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:29.385 17:12:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:29.385 17:12:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:29.385 17:12:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.385 17:12:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.385 17:12:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.385 ************************************ 00:04:29.385 START TEST skip_rpc 00:04:29.385 ************************************ 00:04:29.385 17:12:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:29.385 17:12:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3884234 00:04:29.385 17:12:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.385 17:12:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:29.385 17:12:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:29.385 [2024-07-12 17:12:48.101054] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:04:29.385 [2024-07-12 17:12:48.101094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884234 ] 00:04:29.385 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.385 [2024-07-12 17:12:48.154134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.644 [2024-07-12 17:12:48.228053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3884234 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3884234 ']' 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3884234 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3884234 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3884234' 00:04:34.914 killing process with pid 3884234 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3884234 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3884234 00:04:34.914 00:04:34.914 real 0m5.362s 00:04:34.914 user 0m5.127s 00:04:34.914 sys 0m0.266s 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.914 17:12:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.914 ************************************ 00:04:34.914 END TEST skip_rpc 00:04:34.914 ************************************ 00:04:34.914 17:12:53 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.914 17:12:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:34.914 17:12:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.914 17:12:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.914 
17:12:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.914 ************************************ 00:04:34.914 START TEST skip_rpc_with_json 00:04:34.914 ************************************ 00:04:34.914 17:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:34.914 17:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:34.914 17:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3885180 00:04:34.914 17:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.914 17:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.914 17:12:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3885180 00:04:34.914 17:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3885180 ']' 00:04:34.914 17:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.914 17:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.914 17:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.915 17:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.915 17:12:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.915 [2024-07-12 17:12:53.529067] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:04:34.915 [2024-07-12 17:12:53.529106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885180 ] 00:04:34.915 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.915 [2024-07-12 17:12:53.581971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.915 [2024-07-12 17:12:53.661298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.850 [2024-07-12 17:12:54.335422] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:35.850 request: 00:04:35.850 { 00:04:35.850 "trtype": "tcp", 00:04:35.850 "method": "nvmf_get_transports", 00:04:35.850 "req_id": 1 00:04:35.850 } 00:04:35.850 Got JSON-RPC error response 00:04:35.850 response: 00:04:35.850 { 00:04:35.850 "code": -19, 00:04:35.850 "message": "No such device" 00:04:35.850 } 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.850 [2024-07-12 17:12:54.343517] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.850 17:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:35.850 { 00:04:35.850 "subsystems": [ 00:04:35.850 { 00:04:35.850 "subsystem": "vfio_user_target", 00:04:35.850 "config": null 00:04:35.850 }, 00:04:35.850 { 00:04:35.850 "subsystem": "keyring", 00:04:35.850 "config": [] 00:04:35.850 }, 00:04:35.850 { 00:04:35.850 "subsystem": "iobuf", 00:04:35.850 "config": [ 00:04:35.850 { 00:04:35.850 "method": "iobuf_set_options", 00:04:35.850 "params": { 00:04:35.850 "small_pool_count": 8192, 00:04:35.850 "large_pool_count": 1024, 00:04:35.850 "small_bufsize": 8192, 00:04:35.850 "large_bufsize": 135168 00:04:35.850 } 00:04:35.850 } 00:04:35.850 ] 00:04:35.850 }, 00:04:35.850 { 00:04:35.850 "subsystem": "sock", 00:04:35.850 "config": [ 00:04:35.850 { 00:04:35.850 "method": "sock_set_default_impl", 00:04:35.850 "params": { 00:04:35.850 "impl_name": "posix" 00:04:35.850 } 00:04:35.850 }, 00:04:35.850 { 00:04:35.850 "method": "sock_impl_set_options", 00:04:35.850 "params": { 00:04:35.850 "impl_name": "ssl", 00:04:35.850 "recv_buf_size": 4096, 00:04:35.851 "send_buf_size": 4096, 00:04:35.851 "enable_recv_pipe": true, 00:04:35.851 "enable_quickack": false, 00:04:35.851 "enable_placement_id": 0, 00:04:35.851 "enable_zerocopy_send_server": true, 00:04:35.851 "enable_zerocopy_send_client": false, 00:04:35.851 "zerocopy_threshold": 0, 
00:04:35.851 "tls_version": 0, 00:04:35.851 "enable_ktls": false 00:04:35.851 } 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "method": "sock_impl_set_options", 00:04:35.851 "params": { 00:04:35.851 "impl_name": "posix", 00:04:35.851 "recv_buf_size": 2097152, 00:04:35.851 "send_buf_size": 2097152, 00:04:35.851 "enable_recv_pipe": true, 00:04:35.851 "enable_quickack": false, 00:04:35.851 "enable_placement_id": 0, 00:04:35.851 "enable_zerocopy_send_server": true, 00:04:35.851 "enable_zerocopy_send_client": false, 00:04:35.851 "zerocopy_threshold": 0, 00:04:35.851 "tls_version": 0, 00:04:35.851 "enable_ktls": false 00:04:35.851 } 00:04:35.851 } 00:04:35.851 ] 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "subsystem": "vmd", 00:04:35.851 "config": [] 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "subsystem": "accel", 00:04:35.851 "config": [ 00:04:35.851 { 00:04:35.851 "method": "accel_set_options", 00:04:35.851 "params": { 00:04:35.851 "small_cache_size": 128, 00:04:35.851 "large_cache_size": 16, 00:04:35.851 "task_count": 2048, 00:04:35.851 "sequence_count": 2048, 00:04:35.851 "buf_count": 2048 00:04:35.851 } 00:04:35.851 } 00:04:35.851 ] 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "subsystem": "bdev", 00:04:35.851 "config": [ 00:04:35.851 { 00:04:35.851 "method": "bdev_set_options", 00:04:35.851 "params": { 00:04:35.851 "bdev_io_pool_size": 65535, 00:04:35.851 "bdev_io_cache_size": 256, 00:04:35.851 "bdev_auto_examine": true, 00:04:35.851 "iobuf_small_cache_size": 128, 00:04:35.851 "iobuf_large_cache_size": 16 00:04:35.851 } 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "method": "bdev_raid_set_options", 00:04:35.851 "params": { 00:04:35.851 "process_window_size_kb": 1024 00:04:35.851 } 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "method": "bdev_iscsi_set_options", 00:04:35.851 "params": { 00:04:35.851 "timeout_sec": 30 00:04:35.851 } 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "method": "bdev_nvme_set_options", 00:04:35.851 "params": { 00:04:35.851 "action_on_timeout": 
"none", 00:04:35.851 "timeout_us": 0, 00:04:35.851 "timeout_admin_us": 0, 00:04:35.851 "keep_alive_timeout_ms": 10000, 00:04:35.851 "arbitration_burst": 0, 00:04:35.851 "low_priority_weight": 0, 00:04:35.851 "medium_priority_weight": 0, 00:04:35.851 "high_priority_weight": 0, 00:04:35.851 "nvme_adminq_poll_period_us": 10000, 00:04:35.851 "nvme_ioq_poll_period_us": 0, 00:04:35.851 "io_queue_requests": 0, 00:04:35.851 "delay_cmd_submit": true, 00:04:35.851 "transport_retry_count": 4, 00:04:35.851 "bdev_retry_count": 3, 00:04:35.851 "transport_ack_timeout": 0, 00:04:35.851 "ctrlr_loss_timeout_sec": 0, 00:04:35.851 "reconnect_delay_sec": 0, 00:04:35.851 "fast_io_fail_timeout_sec": 0, 00:04:35.851 "disable_auto_failback": false, 00:04:35.851 "generate_uuids": false, 00:04:35.851 "transport_tos": 0, 00:04:35.851 "nvme_error_stat": false, 00:04:35.851 "rdma_srq_size": 0, 00:04:35.851 "io_path_stat": false, 00:04:35.851 "allow_accel_sequence": false, 00:04:35.851 "rdma_max_cq_size": 0, 00:04:35.851 "rdma_cm_event_timeout_ms": 0, 00:04:35.851 "dhchap_digests": [ 00:04:35.851 "sha256", 00:04:35.851 "sha384", 00:04:35.851 "sha512" 00:04:35.851 ], 00:04:35.851 "dhchap_dhgroups": [ 00:04:35.851 "null", 00:04:35.851 "ffdhe2048", 00:04:35.851 "ffdhe3072", 00:04:35.851 "ffdhe4096", 00:04:35.851 "ffdhe6144", 00:04:35.851 "ffdhe8192" 00:04:35.851 ] 00:04:35.851 } 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "method": "bdev_nvme_set_hotplug", 00:04:35.851 "params": { 00:04:35.851 "period_us": 100000, 00:04:35.851 "enable": false 00:04:35.851 } 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "method": "bdev_wait_for_examine" 00:04:35.851 } 00:04:35.851 ] 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "subsystem": "scsi", 00:04:35.851 "config": null 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "subsystem": "scheduler", 00:04:35.851 "config": [ 00:04:35.851 { 00:04:35.851 "method": "framework_set_scheduler", 00:04:35.851 "params": { 00:04:35.851 "name": "static" 00:04:35.851 } 00:04:35.851 } 
00:04:35.851 ] 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "subsystem": "vhost_scsi", 00:04:35.851 "config": [] 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "subsystem": "vhost_blk", 00:04:35.851 "config": [] 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "subsystem": "ublk", 00:04:35.851 "config": [] 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "subsystem": "nbd", 00:04:35.851 "config": [] 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "subsystem": "nvmf", 00:04:35.851 "config": [ 00:04:35.851 { 00:04:35.851 "method": "nvmf_set_config", 00:04:35.851 "params": { 00:04:35.851 "discovery_filter": "match_any", 00:04:35.851 "admin_cmd_passthru": { 00:04:35.851 "identify_ctrlr": false 00:04:35.851 } 00:04:35.851 } 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "method": "nvmf_set_max_subsystems", 00:04:35.851 "params": { 00:04:35.851 "max_subsystems": 1024 00:04:35.851 } 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "method": "nvmf_set_crdt", 00:04:35.851 "params": { 00:04:35.851 "crdt1": 0, 00:04:35.851 "crdt2": 0, 00:04:35.851 "crdt3": 0 00:04:35.851 } 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "method": "nvmf_create_transport", 00:04:35.851 "params": { 00:04:35.851 "trtype": "TCP", 00:04:35.851 "max_queue_depth": 128, 00:04:35.851 "max_io_qpairs_per_ctrlr": 127, 00:04:35.851 "in_capsule_data_size": 4096, 00:04:35.851 "max_io_size": 131072, 00:04:35.851 "io_unit_size": 131072, 00:04:35.851 "max_aq_depth": 128, 00:04:35.851 "num_shared_buffers": 511, 00:04:35.851 "buf_cache_size": 4294967295, 00:04:35.851 "dif_insert_or_strip": false, 00:04:35.851 "zcopy": false, 00:04:35.851 "c2h_success": true, 00:04:35.851 "sock_priority": 0, 00:04:35.851 "abort_timeout_sec": 1, 00:04:35.851 "ack_timeout": 0, 00:04:35.851 "data_wr_pool_size": 0 00:04:35.851 } 00:04:35.851 } 00:04:35.851 ] 00:04:35.851 }, 00:04:35.851 { 00:04:35.851 "subsystem": "iscsi", 00:04:35.851 "config": [ 00:04:35.851 { 00:04:35.851 "method": "iscsi_set_options", 00:04:35.851 "params": { 00:04:35.851 "node_base": 
"iqn.2016-06.io.spdk", 00:04:35.851 "max_sessions": 128, 00:04:35.851 "max_connections_per_session": 2, 00:04:35.851 "max_queue_depth": 64, 00:04:35.851 "default_time2wait": 2, 00:04:35.851 "default_time2retain": 20, 00:04:35.851 "first_burst_length": 8192, 00:04:35.851 "immediate_data": true, 00:04:35.851 "allow_duplicated_isid": false, 00:04:35.851 "error_recovery_level": 0, 00:04:35.851 "nop_timeout": 60, 00:04:35.851 "nop_in_interval": 30, 00:04:35.851 "disable_chap": false, 00:04:35.851 "require_chap": false, 00:04:35.851 "mutual_chap": false, 00:04:35.851 "chap_group": 0, 00:04:35.851 "max_large_datain_per_connection": 64, 00:04:35.851 "max_r2t_per_connection": 4, 00:04:35.851 "pdu_pool_size": 36864, 00:04:35.851 "immediate_data_pool_size": 16384, 00:04:35.851 "data_out_pool_size": 2048 00:04:35.851 } 00:04:35.851 } 00:04:35.851 ] 00:04:35.851 } 00:04:35.851 ] 00:04:35.851 } 00:04:35.851 17:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:35.851 17:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3885180 00:04:35.851 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3885180 ']' 00:04:35.851 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3885180 00:04:35.851 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:35.851 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:35.851 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3885180 00:04:35.851 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:35.851 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:35.852 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3885180' 00:04:35.852 
killing process with pid 3885180 00:04:35.852 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3885180 00:04:35.852 17:12:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3885180 00:04:36.110 17:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3885425 00:04:36.110 17:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:36.110 17:12:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:41.378 17:12:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3885425 00:04:41.378 17:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3885425 ']' 00:04:41.378 17:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3885425 00:04:41.378 17:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:41.378 17:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:41.378 17:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3885425 00:04:41.378 17:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:41.378 17:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:41.378 17:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3885425' 00:04:41.378 killing process with pid 3885425 00:04:41.378 17:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3885425 00:04:41.378 17:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3885425 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport 
Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:41.638 00:04:41.638 real 0m6.744s 00:04:41.638 user 0m6.588s 00:04:41.638 sys 0m0.580s 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.638 ************************************ 00:04:41.638 END TEST skip_rpc_with_json 00:04:41.638 ************************************ 00:04:41.638 17:13:00 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:41.638 17:13:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:41.638 17:13:00 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.638 17:13:00 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.638 17:13:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.638 ************************************ 00:04:41.638 START TEST skip_rpc_with_delay 00:04:41.638 ************************************ 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.638 [2024-07-12 17:13:00.338819] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:41.638 [2024-07-12 17:13:00.338881] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:41.638 00:04:41.638 real 0m0.066s 00:04:41.638 user 0m0.042s 00:04:41.638 sys 0m0.023s 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.638 17:13:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:41.638 ************************************ 00:04:41.638 END TEST skip_rpc_with_delay 00:04:41.638 ************************************ 00:04:41.638 17:13:00 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:41.638 17:13:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:41.638 17:13:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:41.638 17:13:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:41.638 17:13:00 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.638 17:13:00 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.638 17:13:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.897 ************************************ 00:04:41.897 START TEST exit_on_failed_rpc_init 00:04:41.897 ************************************ 00:04:41.897 17:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:41.897 17:13:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3886396 00:04:41.897 17:13:00 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.897 17:13:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3886396 00:04:41.897 17:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3886396 ']' 00:04:41.897 17:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.897 17:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.897 17:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.897 17:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.897 17:13:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.897 [2024-07-12 17:13:00.466614] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:04:41.897 [2024-07-12 17:13:00.466657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886396 ] 00:04:41.897 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.897 [2024-07-12 17:13:00.519246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.897 [2024-07-12 17:13:00.598978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.833 17:13:01 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.833 [2024-07-12 17:13:01.311446] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:04:42.833 [2024-07-12 17:13:01.311488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886628 ] 00:04:42.833 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.833 [2024-07-12 17:13:01.364167] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.833 [2024-07-12 17:13:01.436902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.833 [2024-07-12 17:13:01.436970] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:42.833 [2024-07-12 17:13:01.436979] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:42.833 [2024-07-12 17:13:01.436984] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3886396 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3886396 ']' 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3886396 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3886396 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3886396' 
00:04:42.833 killing process with pid 3886396 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3886396 00:04:42.833 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3886396 00:04:43.091 00:04:43.091 real 0m1.444s 00:04:43.091 user 0m1.659s 00:04:43.091 sys 0m0.398s 00:04:43.091 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.091 17:13:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.091 ************************************ 00:04:43.091 END TEST exit_on_failed_rpc_init 00:04:43.091 ************************************ 00:04:43.351 17:13:01 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:43.351 17:13:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.351 00:04:43.351 real 0m13.971s 00:04:43.351 user 0m13.564s 00:04:43.351 sys 0m1.497s 00:04:43.351 17:13:01 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.351 17:13:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.351 ************************************ 00:04:43.351 END TEST skip_rpc 00:04:43.351 ************************************ 00:04:43.351 17:13:01 -- common/autotest_common.sh@1142 -- # return 0 00:04:43.351 17:13:01 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:43.351 17:13:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.351 17:13:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.351 17:13:01 -- common/autotest_common.sh@10 -- # set +x 00:04:43.351 ************************************ 00:04:43.351 START TEST rpc_client 00:04:43.351 ************************************ 00:04:43.351 17:13:01 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:43.351 * Looking for test storage... 00:04:43.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:43.351 17:13:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:43.351 OK 00:04:43.351 17:13:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:43.351 00:04:43.351 real 0m0.101s 00:04:43.351 user 0m0.038s 00:04:43.351 sys 0m0.070s 00:04:43.351 17:13:02 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.351 17:13:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:43.351 ************************************ 00:04:43.351 END TEST rpc_client 00:04:43.351 ************************************ 00:04:43.351 17:13:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:43.351 17:13:02 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:43.351 17:13:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.351 17:13:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.351 17:13:02 -- common/autotest_common.sh@10 -- # set +x 00:04:43.610 ************************************ 00:04:43.610 START TEST json_config 00:04:43.610 ************************************ 00:04:43.610 17:13:02 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:43.610 17:13:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.610 
17:13:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.610 17:13:02 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:43.611 17:13:02 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.611 17:13:02 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.611 17:13:02 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.611 17:13:02 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.611 17:13:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.611 17:13:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.611 17:13:02 json_config -- paths/export.sh@5 -- # export PATH 00:04:43.611 17:13:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.611 17:13:02 json_config -- nvmf/common.sh@47 -- # : 0 00:04:43.611 17:13:02 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:43.611 
17:13:02 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:43.611 17:13:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.611 17:13:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.611 17:13:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.611 17:13:02 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:43.611 17:13:02 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:43.611 17:13:02 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:43.611 INFO: JSON configuration test init 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:43.611 17:13:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.611 17:13:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:43.611 17:13:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.611 17:13:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.611 17:13:02 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:43.611 17:13:02 json_config -- json_config/common.sh@9 -- # local app=target 00:04:43.611 17:13:02 json_config -- json_config/common.sh@10 -- # shift 00:04:43.611 17:13:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:43.611 17:13:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:43.611 17:13:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:43.611 17:13:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.611 17:13:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:04:43.611 17:13:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3886755 00:04:43.611 17:13:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:43.611 Waiting for target to run... 00:04:43.611 17:13:02 json_config -- json_config/common.sh@25 -- # waitforlisten 3886755 /var/tmp/spdk_tgt.sock 00:04:43.611 17:13:02 json_config -- common/autotest_common.sh@829 -- # '[' -z 3886755 ']' 00:04:43.611 17:13:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:43.611 17:13:02 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.611 17:13:02 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.611 17:13:02 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:43.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:43.611 17:13:02 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.611 17:13:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.611 [2024-07-12 17:13:02.284959] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:04:43.611 [2024-07-12 17:13:02.285011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886755 ] 00:04:43.611 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.870 [2024-07-12 17:13:02.561048] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.870 [2024-07-12 17:13:02.629082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.436 17:13:03 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.436 17:13:03 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:44.436 17:13:03 json_config -- json_config/common.sh@26 -- # echo '' 00:04:44.436 00:04:44.436 17:13:03 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:44.436 17:13:03 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:44.436 17:13:03 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.436 17:13:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.436 17:13:03 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:44.436 17:13:03 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:44.436 17:13:03 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:44.436 17:13:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.436 17:13:03 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:44.436 17:13:03 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:44.436 17:13:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:47.721 
17:13:06 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:47.721 17:13:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:47.721 17:13:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:47.721 17:13:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:47.721 17:13:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:47.721 17:13:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@286 
-- # [[ 0 -eq 1 ]] 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:47.721 17:13:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:47.721 17:13:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:47.721 17:13:06 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:47.721 17:13:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:47.980 MallocForNvmf0 00:04:47.980 17:13:06 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:47.980 17:13:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:47.980 MallocForNvmf1 00:04:47.980 17:13:06 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:47.980 17:13:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:48.238 [2024-07-12 17:13:06.900981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.238 17:13:06 json_config -- 
json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:48.238 17:13:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:48.497 17:13:07 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:48.497 17:13:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:48.497 17:13:07 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:48.497 17:13:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:48.755 17:13:07 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:48.755 17:13:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:49.013 [2024-07-12 17:13:07.563156] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:49.013 17:13:07 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:49.013 17:13:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.013 17:13:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.013 17:13:07 json_config -- json_config/json_config.sh@293 -- # timing_exit 
json_config_setup_target 00:04:49.013 17:13:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.013 17:13:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.013 17:13:07 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:49.013 17:13:07 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:49.013 17:13:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:49.271 MallocBdevForConfigChangeCheck 00:04:49.271 17:13:07 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:49.271 17:13:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.271 17:13:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.271 17:13:07 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:49.271 17:13:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.530 17:13:08 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:49.530 INFO: shutting down applications... 
00:04:49.530 17:13:08 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:49.530 17:13:08 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:49.530 17:13:08 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:49.530 17:13:08 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:51.433 Calling clear_iscsi_subsystem 00:04:51.433 Calling clear_nvmf_subsystem 00:04:51.433 Calling clear_nbd_subsystem 00:04:51.433 Calling clear_ublk_subsystem 00:04:51.433 Calling clear_vhost_blk_subsystem 00:04:51.433 Calling clear_vhost_scsi_subsystem 00:04:51.433 Calling clear_bdev_subsystem 00:04:51.433 17:13:09 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:51.433 17:13:09 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:51.433 17:13:09 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:51.433 17:13:09 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.433 17:13:09 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:51.433 17:13:09 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:51.433 17:13:10 json_config -- json_config/json_config.sh@345 -- # break 00:04:51.433 17:13:10 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:51.433 17:13:10 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:51.433 17:13:10 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:51.433 17:13:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.433 17:13:10 json_config -- json_config/common.sh@35 -- # [[ -n 3886755 ]] 00:04:51.433 17:13:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3886755 00:04:51.433 17:13:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.433 17:13:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.433 17:13:10 json_config -- json_config/common.sh@41 -- # kill -0 3886755 00:04:51.433 17:13:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.000 17:13:10 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.000 17:13:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.000 17:13:10 json_config -- json_config/common.sh@41 -- # kill -0 3886755 00:04:52.000 17:13:10 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:52.000 17:13:10 json_config -- json_config/common.sh@43 -- # break 00:04:52.000 17:13:10 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:52.000 17:13:10 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:52.000 SPDK target shutdown done 00:04:52.000 17:13:10 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:52.000 INFO: relaunching applications... 
00:04:52.000 17:13:10 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.000 17:13:10 json_config -- json_config/common.sh@9 -- # local app=target 00:04:52.000 17:13:10 json_config -- json_config/common.sh@10 -- # shift 00:04:52.000 17:13:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:52.000 17:13:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:52.000 17:13:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:52.000 17:13:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.000 17:13:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.000 17:13:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3888315 00:04:52.000 17:13:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:52.000 Waiting for target to run... 00:04:52.000 17:13:10 json_config -- json_config/common.sh@25 -- # waitforlisten 3888315 /var/tmp/spdk_tgt.sock 00:04:52.000 17:13:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.000 17:13:10 json_config -- common/autotest_common.sh@829 -- # '[' -z 3888315 ']' 00:04:52.000 17:13:10 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:52.000 17:13:10 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.000 17:13:10 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:52.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:52.000 17:13:10 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.000 17:13:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.000 [2024-07-12 17:13:10.606286] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:04:52.000 [2024-07-12 17:13:10.606344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888315 ] 00:04:52.000 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.258 [2024-07-12 17:13:10.878779] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.258 [2024-07-12 17:13:10.945932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.629 [2024-07-12 17:13:13.959823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:55.629 [2024-07-12 17:13:13.992128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:55.629 17:13:14 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.629 17:13:14 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:55.629 17:13:14 json_config -- json_config/common.sh@26 -- # echo '' 00:04:55.629 00:04:55.629 17:13:14 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:55.629 17:13:14 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:55.629 INFO: Checking if target configuration is the same... 
00:04:55.629 17:13:14 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.629 17:13:14 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:55.629 17:13:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.629 + '[' 2 -ne 2 ']' 00:04:55.629 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:55.629 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:55.629 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:55.629 +++ basename /dev/fd/62 00:04:55.629 ++ mktemp /tmp/62.XXX 00:04:55.629 + tmp_file_1=/tmp/62.0mE 00:04:55.629 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.629 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:55.629 + tmp_file_2=/tmp/spdk_tgt_config.json.KYT 00:04:55.629 + ret=0 00:04:55.629 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.629 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.629 + diff -u /tmp/62.0mE /tmp/spdk_tgt_config.json.KYT 00:04:55.629 + echo 'INFO: JSON config files are the same' 00:04:55.629 INFO: JSON config files are the same 00:04:55.629 + rm /tmp/62.0mE /tmp/spdk_tgt_config.json.KYT 00:04:55.629 + exit 0 00:04:55.629 17:13:14 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:55.629 17:13:14 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:55.629 INFO: changing configuration and checking if this can be detected... 
00:04:55.629 17:13:14 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:55.629 17:13:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:55.888 17:13:14 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.888 17:13:14 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:55.888 17:13:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.888 + '[' 2 -ne 2 ']' 00:04:55.888 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:55.888 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:55.888 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:55.888 +++ basename /dev/fd/62 00:04:55.888 ++ mktemp /tmp/62.XXX 00:04:55.888 + tmp_file_1=/tmp/62.Fol 00:04:55.888 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.888 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:55.888 + tmp_file_2=/tmp/spdk_tgt_config.json.gwe 00:04:55.888 + ret=0 00:04:55.888 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.147 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.405 + diff -u /tmp/62.Fol /tmp/spdk_tgt_config.json.gwe 00:04:56.405 + ret=1 00:04:56.405 + echo '=== Start of file: /tmp/62.Fol ===' 00:04:56.405 + cat /tmp/62.Fol 00:04:56.405 + echo '=== End of file: /tmp/62.Fol ===' 00:04:56.405 + echo '' 00:04:56.405 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gwe ===' 00:04:56.405 + cat /tmp/spdk_tgt_config.json.gwe 00:04:56.405 + echo '=== End of file: /tmp/spdk_tgt_config.json.gwe ===' 00:04:56.405 + echo '' 00:04:56.405 + rm /tmp/62.Fol /tmp/spdk_tgt_config.json.gwe 00:04:56.405 + exit 1 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:56.405 INFO: configuration change detected. 
00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:56.405 17:13:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.405 17:13:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@317 -- # [[ -n 3888315 ]] 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:56.405 17:13:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.405 17:13:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:56.405 17:13:14 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.405 17:13:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.405 17:13:14 json_config -- json_config/json_config.sh@323 -- # killprocess 3888315 00:04:56.406 17:13:14 json_config -- common/autotest_common.sh@948 -- # '[' -z 3888315 ']' 00:04:56.406 17:13:14 json_config -- common/autotest_common.sh@952 -- # kill -0 
3888315 00:04:56.406 17:13:14 json_config -- common/autotest_common.sh@953 -- # uname 00:04:56.406 17:13:14 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.406 17:13:14 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3888315 00:04:56.406 17:13:15 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:56.406 17:13:15 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:56.406 17:13:15 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3888315' 00:04:56.406 killing process with pid 3888315 00:04:56.406 17:13:15 json_config -- common/autotest_common.sh@967 -- # kill 3888315 00:04:56.406 17:13:15 json_config -- common/autotest_common.sh@972 -- # wait 3888315 00:04:57.781 17:13:16 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.040 17:13:16 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:58.040 17:13:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.040 17:13:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.040 17:13:16 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:58.040 17:13:16 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:58.040 INFO: Success 00:04:58.040 00:04:58.040 real 0m14.462s 00:04:58.040 user 0m15.267s 00:04:58.040 sys 0m1.657s 00:04:58.040 17:13:16 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.040 17:13:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.040 ************************************ 00:04:58.040 END TEST json_config 00:04:58.040 ************************************ 00:04:58.040 17:13:16 -- common/autotest_common.sh@1142 -- # return 0 00:04:58.040 17:13:16 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:58.040 17:13:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.040 17:13:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.040 17:13:16 -- common/autotest_common.sh@10 -- # set +x 00:04:58.040 ************************************ 00:04:58.040 START TEST json_config_extra_key 00:04:58.040 ************************************ 00:04:58.040 17:13:16 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:58.040 17:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:58.040 17:13:16 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.040 17:13:16 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.040 17:13:16 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.040 17:13:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.040 17:13:16 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.040 17:13:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.040 17:13:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:58.040 17:13:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:58.040 17:13:16 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:58.040 17:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:58.040 17:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:58.040 17:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:58.040 17:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:58.040 17:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:58.040 17:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:58.040 17:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:58.040 17:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:58.040 17:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:58.040 17:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:58.040 17:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:58.040 INFO: launching applications... 
00:04:58.040 17:13:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.040 17:13:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:58.040 17:13:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:58.040 17:13:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.040 17:13:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.040 17:13:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.040 17:13:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.040 17:13:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.040 17:13:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3889515 00:04:58.040 17:13:16 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.040 17:13:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.040 Waiting for target to run... 
00:04:58.040 17:13:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3889515 /var/tmp/spdk_tgt.sock 00:04:58.040 17:13:16 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3889515 ']' 00:04:58.040 17:13:16 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.040 17:13:16 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.040 17:13:16 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.040 17:13:16 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.040 17:13:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:58.040 [2024-07-12 17:13:16.758211] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:04:58.040 [2024-07-12 17:13:16.758259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889515 ] 00:04:58.040 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.299 [2024-07-12 17:13:17.023048] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.557 [2024-07-12 17:13:17.091047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.817 17:13:17 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.817 17:13:17 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:58.817 17:13:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:58.817 00:04:58.817 17:13:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:58.817 INFO: shutting down applications... 
00:04:58.817 17:13:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:58.817 17:13:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:58.817 17:13:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:58.817 17:13:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3889515 ]] 00:04:58.817 17:13:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3889515 00:04:58.817 17:13:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:58.817 17:13:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.817 17:13:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3889515 00:04:58.817 17:13:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:59.385 17:13:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:59.385 17:13:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.385 17:13:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3889515 00:04:59.385 17:13:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:59.385 17:13:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:59.385 17:13:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:59.385 17:13:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:59.385 SPDK target shutdown done 00:04:59.385 17:13:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:59.385 Success 00:04:59.385 00:04:59.385 real 0m1.422s 00:04:59.385 user 0m1.228s 00:04:59.385 sys 0m0.341s 00:04:59.385 17:13:18 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.385 17:13:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.385 
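The shutdown sequence traced above signals the target and then polls `kill -0` (up to 30 tries, 0.5 s apart) until the pid is gone. A self-contained sketch of that loop; SIGTERM stands in for the SIGINT the test sends, because non-interactive shells start background jobs with SIGINT ignored, and `sleep 60` stands in for spdk_tgt:

```shell
sleep 60 &                  # stand-in for the spdk_tgt process
pid=$!
kill -TERM "$pid"           # the test uses SIGINT on the real target

i=0
while [ $i -lt 30 ]; do
    # kill -0 sends no signal; it only checks the pid still exists
    kill -0 "$pid" 2>/dev/null || break
    i=$((i + 1))
    sleep 0.5
done
kill -0 "$pid" 2>/dev/null || echo "SPDK target shutdown done"
```

The bounded loop is what keeps a hung target from stalling the run forever: after 30 failed probes (~15 s) the caller falls through to its error path instead of waiting indefinitely.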
************************************ 00:04:59.385 END TEST json_config_extra_key 00:04:59.385 ************************************ 00:04:59.385 17:13:18 -- common/autotest_common.sh@1142 -- # return 0 00:04:59.385 17:13:18 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:59.385 17:13:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.385 17:13:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.385 17:13:18 -- common/autotest_common.sh@10 -- # set +x 00:04:59.385 ************************************ 00:04:59.385 START TEST alias_rpc 00:04:59.385 ************************************ 00:04:59.385 17:13:18 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:59.644 * Looking for test storage... 00:04:59.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:59.644 17:13:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:59.644 17:13:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3889799 00:04:59.644 17:13:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3889799 00:04:59.644 17:13:18 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3889799 ']' 00:04:59.644 17:13:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.644 17:13:18 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.644 17:13:18 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.644 17:13:18 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:59.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.644 17:13:18 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.644 17:13:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.644 [2024-07-12 17:13:18.274316] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:04:59.644 [2024-07-12 17:13:18.274361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889799 ] 00:04:59.644 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.644 [2024-07-12 17:13:18.328505] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.644 [2024-07-12 17:13:18.408373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.580 17:13:19 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.580 17:13:19 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:00.580 17:13:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:00.580 17:13:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3889799 00:05:00.580 17:13:19 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3889799 ']' 00:05:00.580 17:13:19 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3889799 00:05:00.580 17:13:19 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:00.580 17:13:19 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.580 17:13:19 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3889799 00:05:00.580 17:13:19 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.580 17:13:19 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.580 
17:13:19 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3889799' 00:05:00.580 killing process with pid 3889799 00:05:00.580 17:13:19 alias_rpc -- common/autotest_common.sh@967 -- # kill 3889799 00:05:00.580 17:13:19 alias_rpc -- common/autotest_common.sh@972 -- # wait 3889799 00:05:01.147 00:05:01.147 real 0m1.473s 00:05:01.147 user 0m1.629s 00:05:01.147 sys 0m0.372s 00:05:01.147 17:13:19 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.147 17:13:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.147 ************************************ 00:05:01.147 END TEST alias_rpc 00:05:01.147 ************************************ 00:05:01.147 17:13:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.147 17:13:19 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:01.147 17:13:19 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.147 17:13:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.147 17:13:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.147 17:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:01.147 ************************************ 00:05:01.147 START TEST spdkcli_tcp 00:05:01.147 ************************************ 00:05:01.147 17:13:19 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.147 * Looking for test storage... 
00:05:01.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:01.147 17:13:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:01.147 17:13:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:01.147 17:13:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:01.147 17:13:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:01.147 17:13:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:01.147 17:13:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:01.147 17:13:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:01.147 17:13:19 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.147 17:13:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.148 17:13:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3890083 00:05:01.148 17:13:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3890083 00:05:01.148 17:13:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:01.148 17:13:19 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3890083 ']' 00:05:01.148 17:13:19 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.148 17:13:19 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.148 17:13:19 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
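The "Waiting for process to start up and listen on UNIX domain socket..." messages come from the `waitforlisten` helper, which polls until the target's socket answers. The real helper probes with an RPC call; in this sketch a file-existence check stands in for that probe, and the background subshell stands in for spdk_tgt creating its socket:

```shell
sock=$(mktemp -u)           # path only, nothing created yet

waitforlisten() {
    # Poll for the "socket" up to 50 times, 0.1 s apart.
    local i=0
    while [ $i -lt 50 ]; do
        [ -e "$1" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}

( sleep 0.3; touch "$sock" ) &   # stand-in for the target coming up
waitforlisten "$sock" && echo "target is listening"
```

The retry budget (the log's `max_retries=100`) bounds how long a crashed-on-startup target can block the suite, the same idea as the shutdown poll.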
00:05:01.148 17:13:19 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.148 17:13:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.148 [2024-07-12 17:13:19.823492] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:01.148 [2024-07-12 17:13:19.823540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890083 ] 00:05:01.148 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.148 [2024-07-12 17:13:19.876205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.406 [2024-07-12 17:13:19.951407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.406 [2024-07-12 17:13:19.951410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.971 17:13:20 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.971 17:13:20 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:01.971 17:13:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3890314 00:05:01.971 17:13:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:01.971 17:13:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:02.231 [ 00:05:02.231 "bdev_malloc_delete", 00:05:02.231 "bdev_malloc_create", 00:05:02.231 "bdev_null_resize", 00:05:02.231 "bdev_null_delete", 00:05:02.231 "bdev_null_create", 00:05:02.231 "bdev_nvme_cuse_unregister", 00:05:02.231 "bdev_nvme_cuse_register", 00:05:02.231 "bdev_opal_new_user", 00:05:02.231 "bdev_opal_set_lock_state", 00:05:02.231 "bdev_opal_delete", 00:05:02.231 "bdev_opal_get_info", 00:05:02.231 "bdev_opal_create", 00:05:02.231 "bdev_nvme_opal_revert", 00:05:02.231 
"bdev_nvme_opal_init", 00:05:02.231 "bdev_nvme_send_cmd", 00:05:02.231 "bdev_nvme_get_path_iostat", 00:05:02.231 "bdev_nvme_get_mdns_discovery_info", 00:05:02.231 "bdev_nvme_stop_mdns_discovery", 00:05:02.231 "bdev_nvme_start_mdns_discovery", 00:05:02.231 "bdev_nvme_set_multipath_policy", 00:05:02.231 "bdev_nvme_set_preferred_path", 00:05:02.231 "bdev_nvme_get_io_paths", 00:05:02.231 "bdev_nvme_remove_error_injection", 00:05:02.231 "bdev_nvme_add_error_injection", 00:05:02.231 "bdev_nvme_get_discovery_info", 00:05:02.231 "bdev_nvme_stop_discovery", 00:05:02.231 "bdev_nvme_start_discovery", 00:05:02.231 "bdev_nvme_get_controller_health_info", 00:05:02.231 "bdev_nvme_disable_controller", 00:05:02.231 "bdev_nvme_enable_controller", 00:05:02.231 "bdev_nvme_reset_controller", 00:05:02.231 "bdev_nvme_get_transport_statistics", 00:05:02.231 "bdev_nvme_apply_firmware", 00:05:02.231 "bdev_nvme_detach_controller", 00:05:02.231 "bdev_nvme_get_controllers", 00:05:02.231 "bdev_nvme_attach_controller", 00:05:02.231 "bdev_nvme_set_hotplug", 00:05:02.231 "bdev_nvme_set_options", 00:05:02.231 "bdev_passthru_delete", 00:05:02.231 "bdev_passthru_create", 00:05:02.231 "bdev_lvol_set_parent_bdev", 00:05:02.231 "bdev_lvol_set_parent", 00:05:02.231 "bdev_lvol_check_shallow_copy", 00:05:02.231 "bdev_lvol_start_shallow_copy", 00:05:02.231 "bdev_lvol_grow_lvstore", 00:05:02.231 "bdev_lvol_get_lvols", 00:05:02.231 "bdev_lvol_get_lvstores", 00:05:02.231 "bdev_lvol_delete", 00:05:02.231 "bdev_lvol_set_read_only", 00:05:02.231 "bdev_lvol_resize", 00:05:02.231 "bdev_lvol_decouple_parent", 00:05:02.231 "bdev_lvol_inflate", 00:05:02.231 "bdev_lvol_rename", 00:05:02.231 "bdev_lvol_clone_bdev", 00:05:02.231 "bdev_lvol_clone", 00:05:02.231 "bdev_lvol_snapshot", 00:05:02.231 "bdev_lvol_create", 00:05:02.231 "bdev_lvol_delete_lvstore", 00:05:02.231 "bdev_lvol_rename_lvstore", 00:05:02.231 "bdev_lvol_create_lvstore", 00:05:02.231 "bdev_raid_set_options", 00:05:02.231 "bdev_raid_remove_base_bdev", 
00:05:02.231 "bdev_raid_add_base_bdev", 00:05:02.231 "bdev_raid_delete", 00:05:02.231 "bdev_raid_create", 00:05:02.231 "bdev_raid_get_bdevs", 00:05:02.231 "bdev_error_inject_error", 00:05:02.231 "bdev_error_delete", 00:05:02.231 "bdev_error_create", 00:05:02.231 "bdev_split_delete", 00:05:02.231 "bdev_split_create", 00:05:02.231 "bdev_delay_delete", 00:05:02.231 "bdev_delay_create", 00:05:02.231 "bdev_delay_update_latency", 00:05:02.231 "bdev_zone_block_delete", 00:05:02.231 "bdev_zone_block_create", 00:05:02.231 "blobfs_create", 00:05:02.231 "blobfs_detect", 00:05:02.231 "blobfs_set_cache_size", 00:05:02.231 "bdev_aio_delete", 00:05:02.231 "bdev_aio_rescan", 00:05:02.231 "bdev_aio_create", 00:05:02.231 "bdev_ftl_set_property", 00:05:02.231 "bdev_ftl_get_properties", 00:05:02.231 "bdev_ftl_get_stats", 00:05:02.231 "bdev_ftl_unmap", 00:05:02.231 "bdev_ftl_unload", 00:05:02.231 "bdev_ftl_delete", 00:05:02.231 "bdev_ftl_load", 00:05:02.231 "bdev_ftl_create", 00:05:02.231 "bdev_virtio_attach_controller", 00:05:02.231 "bdev_virtio_scsi_get_devices", 00:05:02.231 "bdev_virtio_detach_controller", 00:05:02.231 "bdev_virtio_blk_set_hotplug", 00:05:02.231 "bdev_iscsi_delete", 00:05:02.231 "bdev_iscsi_create", 00:05:02.231 "bdev_iscsi_set_options", 00:05:02.231 "accel_error_inject_error", 00:05:02.231 "ioat_scan_accel_module", 00:05:02.231 "dsa_scan_accel_module", 00:05:02.231 "iaa_scan_accel_module", 00:05:02.231 "vfu_virtio_create_scsi_endpoint", 00:05:02.231 "vfu_virtio_scsi_remove_target", 00:05:02.231 "vfu_virtio_scsi_add_target", 00:05:02.231 "vfu_virtio_create_blk_endpoint", 00:05:02.231 "vfu_virtio_delete_endpoint", 00:05:02.231 "keyring_file_remove_key", 00:05:02.231 "keyring_file_add_key", 00:05:02.231 "keyring_linux_set_options", 00:05:02.231 "iscsi_get_histogram", 00:05:02.231 "iscsi_enable_histogram", 00:05:02.231 "iscsi_set_options", 00:05:02.231 "iscsi_get_auth_groups", 00:05:02.231 "iscsi_auth_group_remove_secret", 00:05:02.231 "iscsi_auth_group_add_secret", 
00:05:02.231 "iscsi_delete_auth_group", 00:05:02.231 "iscsi_create_auth_group", 00:05:02.231 "iscsi_set_discovery_auth", 00:05:02.231 "iscsi_get_options", 00:05:02.231 "iscsi_target_node_request_logout", 00:05:02.231 "iscsi_target_node_set_redirect", 00:05:02.231 "iscsi_target_node_set_auth", 00:05:02.231 "iscsi_target_node_add_lun", 00:05:02.231 "iscsi_get_stats", 00:05:02.231 "iscsi_get_connections", 00:05:02.231 "iscsi_portal_group_set_auth", 00:05:02.231 "iscsi_start_portal_group", 00:05:02.231 "iscsi_delete_portal_group", 00:05:02.231 "iscsi_create_portal_group", 00:05:02.231 "iscsi_get_portal_groups", 00:05:02.231 "iscsi_delete_target_node", 00:05:02.231 "iscsi_target_node_remove_pg_ig_maps", 00:05:02.231 "iscsi_target_node_add_pg_ig_maps", 00:05:02.231 "iscsi_create_target_node", 00:05:02.231 "iscsi_get_target_nodes", 00:05:02.231 "iscsi_delete_initiator_group", 00:05:02.231 "iscsi_initiator_group_remove_initiators", 00:05:02.231 "iscsi_initiator_group_add_initiators", 00:05:02.231 "iscsi_create_initiator_group", 00:05:02.231 "iscsi_get_initiator_groups", 00:05:02.231 "nvmf_set_crdt", 00:05:02.231 "nvmf_set_config", 00:05:02.231 "nvmf_set_max_subsystems", 00:05:02.231 "nvmf_stop_mdns_prr", 00:05:02.231 "nvmf_publish_mdns_prr", 00:05:02.231 "nvmf_subsystem_get_listeners", 00:05:02.231 "nvmf_subsystem_get_qpairs", 00:05:02.231 "nvmf_subsystem_get_controllers", 00:05:02.231 "nvmf_get_stats", 00:05:02.231 "nvmf_get_transports", 00:05:02.231 "nvmf_create_transport", 00:05:02.231 "nvmf_get_targets", 00:05:02.231 "nvmf_delete_target", 00:05:02.231 "nvmf_create_target", 00:05:02.231 "nvmf_subsystem_allow_any_host", 00:05:02.231 "nvmf_subsystem_remove_host", 00:05:02.231 "nvmf_subsystem_add_host", 00:05:02.231 "nvmf_ns_remove_host", 00:05:02.231 "nvmf_ns_add_host", 00:05:02.231 "nvmf_subsystem_remove_ns", 00:05:02.231 "nvmf_subsystem_add_ns", 00:05:02.231 "nvmf_subsystem_listener_set_ana_state", 00:05:02.231 "nvmf_discovery_get_referrals", 00:05:02.231 
"nvmf_discovery_remove_referral", 00:05:02.231 "nvmf_discovery_add_referral", 00:05:02.231 "nvmf_subsystem_remove_listener", 00:05:02.231 "nvmf_subsystem_add_listener", 00:05:02.231 "nvmf_delete_subsystem", 00:05:02.231 "nvmf_create_subsystem", 00:05:02.231 "nvmf_get_subsystems", 00:05:02.231 "env_dpdk_get_mem_stats", 00:05:02.231 "nbd_get_disks", 00:05:02.231 "nbd_stop_disk", 00:05:02.231 "nbd_start_disk", 00:05:02.231 "ublk_recover_disk", 00:05:02.231 "ublk_get_disks", 00:05:02.231 "ublk_stop_disk", 00:05:02.231 "ublk_start_disk", 00:05:02.231 "ublk_destroy_target", 00:05:02.231 "ublk_create_target", 00:05:02.231 "virtio_blk_create_transport", 00:05:02.231 "virtio_blk_get_transports", 00:05:02.231 "vhost_controller_set_coalescing", 00:05:02.231 "vhost_get_controllers", 00:05:02.231 "vhost_delete_controller", 00:05:02.231 "vhost_create_blk_controller", 00:05:02.231 "vhost_scsi_controller_remove_target", 00:05:02.231 "vhost_scsi_controller_add_target", 00:05:02.231 "vhost_start_scsi_controller", 00:05:02.231 "vhost_create_scsi_controller", 00:05:02.231 "thread_set_cpumask", 00:05:02.231 "framework_get_governor", 00:05:02.231 "framework_get_scheduler", 00:05:02.231 "framework_set_scheduler", 00:05:02.231 "framework_get_reactors", 00:05:02.231 "thread_get_io_channels", 00:05:02.231 "thread_get_pollers", 00:05:02.231 "thread_get_stats", 00:05:02.231 "framework_monitor_context_switch", 00:05:02.231 "spdk_kill_instance", 00:05:02.231 "log_enable_timestamps", 00:05:02.231 "log_get_flags", 00:05:02.231 "log_clear_flag", 00:05:02.231 "log_set_flag", 00:05:02.231 "log_get_level", 00:05:02.231 "log_set_level", 00:05:02.232 "log_get_print_level", 00:05:02.232 "log_set_print_level", 00:05:02.232 "framework_enable_cpumask_locks", 00:05:02.232 "framework_disable_cpumask_locks", 00:05:02.232 "framework_wait_init", 00:05:02.232 "framework_start_init", 00:05:02.232 "scsi_get_devices", 00:05:02.232 "bdev_get_histogram", 00:05:02.232 "bdev_enable_histogram", 00:05:02.232 
"bdev_set_qos_limit", 00:05:02.232 "bdev_set_qd_sampling_period", 00:05:02.232 "bdev_get_bdevs", 00:05:02.232 "bdev_reset_iostat", 00:05:02.232 "bdev_get_iostat", 00:05:02.232 "bdev_examine", 00:05:02.232 "bdev_wait_for_examine", 00:05:02.232 "bdev_set_options", 00:05:02.232 "notify_get_notifications", 00:05:02.232 "notify_get_types", 00:05:02.232 "accel_get_stats", 00:05:02.232 "accel_set_options", 00:05:02.232 "accel_set_driver", 00:05:02.232 "accel_crypto_key_destroy", 00:05:02.232 "accel_crypto_keys_get", 00:05:02.232 "accel_crypto_key_create", 00:05:02.232 "accel_assign_opc", 00:05:02.232 "accel_get_module_info", 00:05:02.232 "accel_get_opc_assignments", 00:05:02.232 "vmd_rescan", 00:05:02.232 "vmd_remove_device", 00:05:02.232 "vmd_enable", 00:05:02.232 "sock_get_default_impl", 00:05:02.232 "sock_set_default_impl", 00:05:02.232 "sock_impl_set_options", 00:05:02.232 "sock_impl_get_options", 00:05:02.232 "iobuf_get_stats", 00:05:02.232 "iobuf_set_options", 00:05:02.232 "keyring_get_keys", 00:05:02.232 "framework_get_pci_devices", 00:05:02.232 "framework_get_config", 00:05:02.232 "framework_get_subsystems", 00:05:02.232 "vfu_tgt_set_base_path", 00:05:02.232 "trace_get_info", 00:05:02.232 "trace_get_tpoint_group_mask", 00:05:02.232 "trace_disable_tpoint_group", 00:05:02.232 "trace_enable_tpoint_group", 00:05:02.232 "trace_clear_tpoint_mask", 00:05:02.232 "trace_set_tpoint_mask", 00:05:02.232 "spdk_get_version", 00:05:02.232 "rpc_get_methods" 00:05:02.232 ] 00:05:02.232 17:13:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:02.232 17:13:20 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.232 17:13:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.232 17:13:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:02.232 17:13:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3890083 00:05:02.232 17:13:20 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3890083 ']' 
00:05:02.232 17:13:20 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3890083 00:05:02.232 17:13:20 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:02.232 17:13:20 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.232 17:13:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3890083 00:05:02.232 17:13:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.232 17:13:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.232 17:13:20 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3890083' 00:05:02.232 killing process with pid 3890083 00:05:02.232 17:13:20 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3890083 00:05:02.232 17:13:20 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3890083 00:05:02.490 00:05:02.490 real 0m1.475s 00:05:02.490 user 0m2.726s 00:05:02.490 sys 0m0.426s 00:05:02.490 17:13:21 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.490 17:13:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.490 ************************************ 00:05:02.490 END TEST spdkcli_tcp 00:05:02.490 ************************************ 00:05:02.490 17:13:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:02.490 17:13:21 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.490 17:13:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.490 17:13:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.490 17:13:21 -- common/autotest_common.sh@10 -- # set +x 00:05:02.490 ************************************ 00:05:02.490 START TEST dpdk_mem_utility 00:05:02.490 ************************************ 00:05:02.490 17:13:21 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.750 * Looking for test storage... 00:05:02.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:02.750 17:13:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:02.750 17:13:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3890387 00:05:02.750 17:13:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3890387 00:05:02.750 17:13:21 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3890387 ']' 00:05:02.750 17:13:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.750 17:13:21 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.750 17:13:21 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.750 17:13:21 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.750 17:13:21 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.750 17:13:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.750 [2024-07-12 17:13:21.353005] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:05:02.750 [2024-07-12 17:13:21.353048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890387 ] 00:05:02.750 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.750 [2024-07-12 17:13:21.407393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.750 [2024-07-12 17:13:21.486911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.686 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.686 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:03.686 17:13:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:03.686 17:13:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:03.686 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.686 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.686 { 00:05:03.686 "filename": "/tmp/spdk_mem_dump.txt" 00:05:03.686 } 00:05:03.686 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.686 17:13:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:03.686 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:03.686 1 heaps totaling size 814.000000 MiB 00:05:03.686 size: 814.000000 MiB heap id: 0 00:05:03.686 end heaps---------- 00:05:03.686 8 mempools totaling size 598.116089 MiB 00:05:03.686 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:03.686 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:03.686 size: 84.521057 MiB name: bdev_io_3890387 00:05:03.686 size: 51.011292 MiB name: evtpool_3890387 
00:05:03.686 size: 50.003479 MiB name: msgpool_3890387 00:05:03.686 size: 21.763794 MiB name: PDU_Pool 00:05:03.686 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:03.686 size: 0.026123 MiB name: Session_Pool 00:05:03.686 end mempools------- 00:05:03.686 6 memzones totaling size 4.142822 MiB 00:05:03.686 size: 1.000366 MiB name: RG_ring_0_3890387 00:05:03.686 size: 1.000366 MiB name: RG_ring_1_3890387 00:05:03.686 size: 1.000366 MiB name: RG_ring_4_3890387 00:05:03.686 size: 1.000366 MiB name: RG_ring_5_3890387 00:05:03.686 size: 0.125366 MiB name: RG_ring_2_3890387 00:05:03.686 size: 0.015991 MiB name: RG_ring_3_3890387 00:05:03.686 end memzones------- 00:05:03.686 17:13:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:03.686 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:03.686 list of free elements. size: 12.519348 MiB 00:05:03.686 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:03.686 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:03.686 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:03.686 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:03.686 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:03.686 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:03.687 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:03.687 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:03.687 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:03.687 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:03.687 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:03.687 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:03.687 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:03.687 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:05:03.687 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:03.687 list of standard malloc elements. size: 199.218079 MiB 00:05:03.687 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:03.687 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:03.687 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:03.687 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:03.687 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:03.687 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:03.687 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:03.687 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:03.687 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:03.687 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:03.687 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:03.687 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:03.687 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:03.687 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:03.687 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:03.687 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:03.687 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:03.687 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:03.687 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:03.687 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:03.687 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:03.687 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:03.687 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:03.687 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:03.687 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:03.687 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:05:03.687 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:03.687 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:03.687 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:03.687 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:03.687 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:03.687 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:03.687 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:03.687 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:03.687 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:03.687 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:03.687 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:03.687 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:03.687 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:03.687 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:03.687 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:03.687 list of memzone associated elements. 
size: 602.262573 MiB 00:05:03.687 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:03.687 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:03.687 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:03.687 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:03.687 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:03.687 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3890387_0 00:05:03.687 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:03.687 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3890387_0 00:05:03.687 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:03.687 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3890387_0 00:05:03.687 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:03.687 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:03.687 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:03.687 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:03.687 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:03.687 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3890387 00:05:03.687 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:03.687 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3890387 00:05:03.687 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:03.687 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3890387 00:05:03.687 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:03.687 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:03.687 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:03.687 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:03.687 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:03.687 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:03.687 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:03.687 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:03.687 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:03.687 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3890387 00:05:03.687 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:03.687 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3890387 00:05:03.687 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:03.687 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3890387 00:05:03.687 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:03.687 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3890387 00:05:03.687 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:03.687 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3890387 00:05:03.687 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:03.687 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:03.687 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:03.687 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:03.687 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:03.687 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:03.687 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:03.687 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3890387 00:05:03.687 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:03.687 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:03.687 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:03.687 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:03.687 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:05:03.687 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3890387 00:05:03.687 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:03.687 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:03.687 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:03.687 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3890387 00:05:03.687 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:03.687 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3890387 00:05:03.687 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:03.687 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:03.687 17:13:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:03.687 17:13:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3890387 00:05:03.687 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3890387 ']' 00:05:03.687 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3890387 00:05:03.687 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:03.687 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.687 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3890387 00:05:03.687 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.687 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.687 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3890387' 00:05:03.687 killing process with pid 3890387 00:05:03.687 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3890387 00:05:03.687 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3890387 00:05:03.946 00:05:03.946 real 0m1.380s 
00:05:03.946 user 0m1.471s 00:05:03.946 sys 0m0.367s 00:05:03.946 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.946 17:13:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.946 ************************************ 00:05:03.946 END TEST dpdk_mem_utility 00:05:03.946 ************************************ 00:05:03.946 17:13:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:03.946 17:13:22 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:03.946 17:13:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.946 17:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.946 17:13:22 -- common/autotest_common.sh@10 -- # set +x 00:05:03.946 ************************************ 00:05:03.946 START TEST event 00:05:03.946 ************************************ 00:05:03.946 17:13:22 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:04.205 * Looking for test storage... 
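The dpdk_mem_info.py dump above prints one record per heap element in the form `element at address: <addr> with size: <size> MiB`. A minimal parsing sketch of that output (the regex and helper name are assumptions for illustration, not part of SPDK):

```python
import re

# Matches heap-dump records like those printed by dpdk_mem_info.py above.
# The pattern is inferred from the log text, not taken from an SPDK API.
ELEMENT_RE = re.compile(r"element at address: (0x[0-9a-f]+) with size: ([0-9.]+) MiB")

def parse_elements(dump):
    """Return (address, size_mib) pairs found in a heap-dump string."""
    return [(m.group(1), float(m.group(2))) for m in ELEMENT_RE.finditer(dump)]

# Two records copied from the free-element list above
sample = ("element at address: 0x200000400000 with size: 1.999512 MiB "
          "element at address: 0x200018e00000 with size: 0.999878 MiB")
elements = parse_elements(sample)
total = sum(size for _, size in elements)
```

Summing the parsed sizes is how totals like "list of free elements. size: 12.519348 MiB" could be cross-checked against the individual records.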
00:05:04.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:04.205 17:13:22 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:04.205 17:13:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:04.205 17:13:22 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.205 17:13:22 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:04.205 17:13:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.205 17:13:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.205 ************************************ 00:05:04.205 START TEST event_perf 00:05:04.205 ************************************ 00:05:04.205 17:13:22 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.205 Running I/O for 1 seconds...[2024-07-12 17:13:22.807657] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:05:04.205 [2024-07-12 17:13:22.807728] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890692 ]
00:05:04.205 EAL: No free 2048 kB hugepages reported on node 1
00:05:04.205 [2024-07-12 17:13:22.864290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:04.205 [2024-07-12 17:13:22.940709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:04.205 [2024-07-12 17:13:22.940808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:04.205 [2024-07-12 17:13:22.940913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:04.205 [2024-07-12 17:13:22.940916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.581 Running I/O for 1 seconds...
00:05:05.581 lcore 0: 213227
00:05:05.581 lcore 1: 213224
00:05:05.581 lcore 2: 213225
00:05:05.581 lcore 3: 213226
00:05:05.581 done.
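event_perf above is launched with `-m 0xF`, and the log shows four reactors starting on lcores 0-3. The mask-to-lcore expansion can be sketched as follows (the helper name is hypothetical; DPDK's EAL does this internally):

```python
def lcores_from_mask(mask):
    """Expand a hex core mask like 0xF into the list of lcore indices it selects."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

# -m 0xF selects lcores 0-3, matching the four reactors in the log above
cores = lcores_from_mask(0xF)
```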
00:05:05.581 00:05:05.581 real 0m1.223s 00:05:05.581 user 0m4.142s 00:05:05.581 sys 0m0.078s 00:05:05.581 17:13:24 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.581 17:13:24 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.581 ************************************ 00:05:05.581 END TEST event_perf 00:05:05.581 ************************************ 00:05:05.581 17:13:24 event -- common/autotest_common.sh@1142 -- # return 0 00:05:05.581 17:13:24 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:05.581 17:13:24 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:05.581 17:13:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.581 17:13:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.581 ************************************ 00:05:05.581 START TEST event_reactor 00:05:05.581 ************************************ 00:05:05.581 17:13:24 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:05.581 [2024-07-12 17:13:24.098623] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:05:05.581 [2024-07-12 17:13:24.098690] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890931 ] 00:05:05.581 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.581 [2024-07-12 17:13:24.156874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.581 [2024-07-12 17:13:24.229517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.517 test_start 00:05:06.517 oneshot 00:05:06.517 tick 100 00:05:06.517 tick 100 00:05:06.517 tick 250 00:05:06.517 tick 100 00:05:06.517 tick 100 00:05:06.517 tick 100 00:05:06.517 tick 250 00:05:06.517 tick 500 00:05:06.517 tick 100 00:05:06.517 tick 100 00:05:06.517 tick 250 00:05:06.517 tick 100 00:05:06.517 tick 100 00:05:06.517 test_end 00:05:06.776 00:05:06.776 real 0m1.218s 00:05:06.776 user 0m1.130s 00:05:06.776 sys 0m0.084s 00:05:06.776 17:13:25 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.776 17:13:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:06.776 ************************************ 00:05:06.776 END TEST event_reactor 00:05:06.776 ************************************ 00:05:06.776 17:13:25 event -- common/autotest_common.sh@1142 -- # return 0 00:05:06.776 17:13:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.776 17:13:25 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:06.776 17:13:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.776 17:13:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.776 ************************************ 00:05:06.776 START TEST event_reactor_perf 00:05:06.776 ************************************ 00:05:06.776 17:13:25 
event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.776 [2024-07-12 17:13:25.370125] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:06.776 [2024-07-12 17:13:25.370174] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891183 ] 00:05:06.776 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.776 [2024-07-12 17:13:25.423246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.776 [2024-07-12 17:13:25.494489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.152 test_start 00:05:08.152 test_end 00:05:08.152 Performance: 500940 events per second 00:05:08.152 00:05:08.152 real 0m1.204s 00:05:08.152 user 0m1.137s 00:05:08.152 sys 0m0.063s 00:05:08.152 17:13:26 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.152 17:13:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.152 ************************************ 00:05:08.152 END TEST event_reactor_perf 00:05:08.152 ************************************ 00:05:08.152 17:13:26 event -- common/autotest_common.sh@1142 -- # return 0 00:05:08.152 17:13:26 event -- event/event.sh@49 -- # uname -s 00:05:08.152 17:13:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:08.152 17:13:26 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:08.152 17:13:26 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.152 17:13:26 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.152 17:13:26 event -- common/autotest_common.sh@10 -- # set +x 
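reactor_perf reports a throughput figure ("Performance: 500940 events per second") from a timed run (`-t 1`). A generic sketch of that kind of rate measurement, not the SPDK implementation:

```python
import time

def measure_event_rate(duration_s):
    """Count no-op events dispatched within duration_s and return events/sec."""
    deadline = time.perf_counter() + duration_s
    events = 0
    while time.perf_counter() < deadline:
        events += 1  # stand-in for dispatching one reactor event
    return events / duration_s

rate = measure_event_rate(0.05)
```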
00:05:08.152 ************************************ 00:05:08.152 START TEST event_scheduler 00:05:08.152 ************************************ 00:05:08.152 17:13:26 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:08.152 * Looking for test storage... 00:05:08.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:08.152 17:13:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:08.152 17:13:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3891455 00:05:08.152 17:13:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.152 17:13:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:08.152 17:13:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3891455 00:05:08.152 17:13:26 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3891455 ']' 00:05:08.152 17:13:26 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.152 17:13:26 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.152 17:13:26 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.152 17:13:26 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.152 17:13:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.152 [2024-07-12 17:13:26.745253] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:05:08.152 [2024-07-12 17:13:26.745301] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891455 ] 00:05:08.152 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.152 [2024-07-12 17:13:26.794763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.152 [2024-07-12 17:13:26.871512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.152 [2024-07-12 17:13:26.871534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.152 [2024-07-12 17:13:26.871619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.152 [2024-07-12 17:13:26.871621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.088 17:13:27 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.088 17:13:27 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:09.088 17:13:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:09.088 17:13:27 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 [2024-07-12 17:13:27.553970] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:09.088 [2024-07-12 17:13:27.553988] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:09.088 [2024-07-12 17:13:27.553997] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:09.088 [2024-07-12 17:13:27.554003] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:09.088 [2024-07-12 17:13:27.554008] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:05:09.088 17:13:27 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.088 17:13:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:09.088 17:13:27 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 [2024-07-12 17:13:27.629899] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:09.088 17:13:27 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.088 17:13:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:09.088 17:13:27 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.088 17:13:27 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 ************************************ 00:05:09.088 START TEST scheduler_create_thread 00:05:09.088 ************************************ 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 2 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 3 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 4 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 5 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 6 
00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 7 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 8 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 9 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:09.088 17:13:27 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 10 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.088 17:13:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.654 17:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.654 17:13:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:09.654 17:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.654 17:13:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.029 17:13:29 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.029 17:13:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:11.029 17:13:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:11.029 17:13:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.029 17:13:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.404 17:13:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.404 00:05:12.404 real 0m3.102s 00:05:12.404 user 0m0.020s 00:05:12.404 sys 0m0.007s 00:05:12.404 17:13:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.404 17:13:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.404 ************************************ 00:05:12.404 END TEST scheduler_create_thread 00:05:12.404 ************************************ 00:05:12.404 17:13:30 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:12.404 17:13:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:12.404 17:13:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3891455 00:05:12.404 17:13:30 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3891455 ']' 00:05:12.404 17:13:30 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3891455 00:05:12.404 17:13:30 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:12.404 17:13:30 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.404 17:13:30 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3891455 00:05:12.404 17:13:30 event.event_scheduler -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:12.404 17:13:30 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:12.404 17:13:30 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3891455' 00:05:12.404 killing process with pid 3891455 00:05:12.404 17:13:30 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3891455 00:05:12.404 17:13:30 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3891455 00:05:12.404 [2024-07-12 17:13:31.149234] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:12.663 00:05:12.663 real 0m4.742s 00:05:12.663 user 0m9.259s 00:05:12.663 sys 0m0.352s 00:05:12.663 17:13:31 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.663 17:13:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.663 ************************************ 00:05:12.663 END TEST event_scheduler 00:05:12.663 ************************************ 00:05:12.663 17:13:31 event -- common/autotest_common.sh@1142 -- # return 0 00:05:12.663 17:13:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:12.663 17:13:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:12.663 17:13:31 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.663 17:13:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.663 17:13:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.663 ************************************ 00:05:12.663 START TEST app_repeat 00:05:12.663 ************************************ 00:05:12.663 17:13:31 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:12.663 17:13:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.663 17:13:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.663 17:13:31 
event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:12.663 17:13:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.663 17:13:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:12.663 17:13:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:12.663 17:13:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:12.921 17:13:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3892436 00:05:12.921 17:13:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.921 17:13:31 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:12.921 17:13:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3892436' 00:05:12.921 Process app_repeat pid: 3892436 00:05:12.921 17:13:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.921 17:13:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:12.921 spdk_app_start Round 0 00:05:12.921 17:13:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3892436 /var/tmp/spdk-nbd.sock 00:05:12.921 17:13:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3892436 ']' 00:05:12.921 17:13:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.921 17:13:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.921 17:13:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:12.921 17:13:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.921 17:13:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.921 [2024-07-12 17:13:31.470382] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:12.921 [2024-07-12 17:13:31.470434] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892436 ] 00:05:12.921 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.921 [2024-07-12 17:13:31.526003] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.921 [2024-07-12 17:13:31.598409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.921 [2024-07-12 17:13:31.598411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.856 17:13:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.856 17:13:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:13.856 17:13:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.856 Malloc0 00:05:13.856 17:13:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.115 Malloc1 00:05:14.115 17:13:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local 
bdev_list 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.115 /dev/nbd0 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.115 17:13:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.115 17:13:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:14.115 17:13:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:14.115 17:13:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:14.115 17:13:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:14.115 17:13:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:14.115 17:13:32 event.app_repeat -- common/autotest_common.sh@871 
-- # break 00:05:14.115 17:13:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:14.115 17:13:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:14.116 17:13:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.116 1+0 records in 00:05:14.116 1+0 records out 00:05:14.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230561 s, 17.8 MB/s 00:05:14.116 17:13:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.116 17:13:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:14.116 17:13:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.116 17:13:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:14.116 17:13:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:14.116 17:13:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.116 17:13:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.116 17:13:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.373 /dev/nbd1 00:05:14.373 17:13:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.373 17:13:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.373 17:13:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:14.374 17:13:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:14.374 17:13:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:14.374 17:13:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:05:14.374 17:13:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:14.374 17:13:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:14.374 17:13:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:14.374 17:13:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:14.374 17:13:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.374 1+0 records in 00:05:14.374 1+0 records out 00:05:14.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189778 s, 21.6 MB/s 00:05:14.374 17:13:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.374 17:13:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:14.374 17:13:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.374 17:13:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:14.374 17:13:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:14.374 17:13:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.374 17:13:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.374 17:13:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.374 17:13:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.374 17:13:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.632 { 00:05:14.632 "nbd_device": "/dev/nbd0", 00:05:14.632 
"bdev_name": "Malloc0" 00:05:14.632 }, 00:05:14.632 { 00:05:14.632 "nbd_device": "/dev/nbd1", 00:05:14.632 "bdev_name": "Malloc1" 00:05:14.632 } 00:05:14.632 ]' 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.632 { 00:05:14.632 "nbd_device": "/dev/nbd0", 00:05:14.632 "bdev_name": "Malloc0" 00:05:14.632 }, 00:05:14.632 { 00:05:14.632 "nbd_device": "/dev/nbd1", 00:05:14.632 "bdev_name": "Malloc1" 00:05:14.632 } 00:05:14.632 ]' 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.632 /dev/nbd1' 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.632 /dev/nbd1' 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
bs=4096 count=256 00:05:14.632 256+0 records in 00:05:14.632 256+0 records out 00:05:14.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100133 s, 105 MB/s 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.632 256+0 records in 00:05:14.632 256+0 records out 00:05:14.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138075 s, 75.9 MB/s 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.632 256+0 records in 00:05:14.632 256+0 records out 00:05:14.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148371 s, 70.7 MB/s 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 
/dev/nbd0 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.632 17:13:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.890 17:13:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.890 17:13:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.890 17:13:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.890 17:13:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.890 17:13:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.890 17:13:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.890 17:13:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.890 17:13:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.890 17:13:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.890 
17:13:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.148 17:13:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.148 17:13:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.148 17:13:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.148 17:13:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.148 17:13:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.148 17:13:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.148 17:13:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.148 17:13:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.148 17:13:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.148 17:13:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.148 17:13:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.456 17:13:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.456 17:13:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.456 17:13:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.456 17:13:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.456 17:13:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.456 17:13:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.456 17:13:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.456 17:13:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.456 17:13:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.456 
17:13:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.456 17:13:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.456 17:13:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.456 17:13:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.456 17:13:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.730 [2024-07-12 17:13:34.388292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.730 [2024-07-12 17:13:34.457054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.730 [2024-07-12 17:13:34.457057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.730 [2024-07-12 17:13:34.497556] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.730 [2024-07-12 17:13:34.497595] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.012 17:13:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.012 17:13:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:19.012 spdk_app_start Round 1 00:05:19.012 17:13:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3892436 /var/tmp/spdk-nbd.sock 00:05:19.012 17:13:37 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3892436 ']' 00:05:19.012 17:13:37 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.012 17:13:37 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.012 17:13:37 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:19.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.012 17:13:37 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.012 17:13:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.012 17:13:37 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.012 17:13:37 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:19.012 17:13:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.012 Malloc0 00:05:19.012 17:13:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.012 Malloc1 00:05:19.012 17:13:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.012 17:13:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.270 /dev/nbd0 00:05:19.270 17:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.270 17:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.270 17:13:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:19.270 17:13:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:19.270 17:13:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.270 17:13:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.270 17:13:37 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:19.270 17:13:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:19.270 17:13:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.270 17:13:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.270 17:13:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.270 1+0 records in 00:05:19.270 1+0 records out 00:05:19.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180597 s, 22.7 MB/s 00:05:19.270 17:13:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.270 17:13:37 
event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:19.270 17:13:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.270 17:13:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.270 17:13:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:19.270 17:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.270 17:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.270 17:13:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.528 /dev/nbd1 00:05:19.528 17:13:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.528 17:13:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.528 1+0 records in 00:05:19.528 1+0 records out 00:05:19.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157437 s, 
26.0 MB/s 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.528 17:13:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:19.528 17:13:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.528 17:13:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.528 17:13:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.528 17:13:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.528 17:13:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:19.803 { 00:05:19.803 "nbd_device": "/dev/nbd0", 00:05:19.803 "bdev_name": "Malloc0" 00:05:19.803 }, 00:05:19.803 { 00:05:19.803 "nbd_device": "/dev/nbd1", 00:05:19.803 "bdev_name": "Malloc1" 00:05:19.803 } 00:05:19.803 ]' 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:19.803 { 00:05:19.803 "nbd_device": "/dev/nbd0", 00:05:19.803 "bdev_name": "Malloc0" 00:05:19.803 }, 00:05:19.803 { 00:05:19.803 "nbd_device": "/dev/nbd1", 00:05:19.803 "bdev_name": "Malloc1" 00:05:19.803 } 00:05:19.803 ]' 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:19.803 /dev/nbd1' 00:05:19.803 17:13:38 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:19.803 /dev/nbd1' 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.803 256+0 records in 00:05:19.803 256+0 records out 00:05:19.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103159 s, 102 MB/s 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.803 256+0 records in 00:05:19.803 256+0 records out 00:05:19.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133783 s, 78.4 MB/s 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.803 17:13:38 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.803 256+0 records in 00:05:19.803 256+0 records out 00:05:19.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153085 s, 68.5 MB/s 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.803 17:13:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.804 17:13:38 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:19.804 17:13:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:19.804 17:13:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:19.804 17:13:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:19.804 17:13:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:20.061 17:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:20.061 17:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:20.061 17:13:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:20.061 17:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:20.061 17:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:20.061 17:13:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:20.061 17:13:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:20.061 17:13:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:20.061 17:13:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:20.061 17:13:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:20.319 17:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:20.319 17:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:20.319 17:13:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:20.319 17:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:20.319 17:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:20.319 17:13:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:20.319 17:13:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:20.319 17:13:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:20.319 17:13:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:20.319 17:13:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:20.319 17:13:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:20.319 17:13:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:20.319 17:13:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:20.319 17:13:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:20.319 17:13:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:20.319 17:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:20.319 17:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:20.319 17:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:20.319 17:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:20.319 17:13:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:20.319 17:13:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:20.319 17:13:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:20.319 17:13:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:20.319 17:13:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:20.577 17:13:39 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:20.836 [2024-07-12 17:13:39.466163] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:20.836 [2024-07-12 17:13:39.536167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:20.836 [2024-07-12 17:13:39.536170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.836 [2024-07-12 17:13:39.577112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:20.836 [2024-07-12 17:13:39.577153] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:24.121 17:13:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:24.121 17:13:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:24.121 spdk_app_start Round 2
00:05:24.121 17:13:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3892436 /var/tmp/spdk-nbd.sock
00:05:24.121 17:13:42 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3892436 ']'
00:05:24.121 17:13:42 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:24.121 17:13:42 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:24.121 17:13:42 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:24.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:24.121 17:13:42 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:24.121 17:13:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:24.121 17:13:42 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:24.121 17:13:42 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:05:24.121 17:13:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:24.121 Malloc0
00:05:24.121 17:13:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:24.121 Malloc1
00:05:24.121 17:13:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:24.121 17:13:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.121 17:13:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:24.121 17:13:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:24.121 17:13:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.121 17:13:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:24.121 17:13:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:24.122 17:13:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.122 17:13:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:24.122 17:13:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:24.122 17:13:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.122 17:13:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:24.122 17:13:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:24.122 17:13:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:24.122 17:13:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:24.122 17:13:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:24.379 /dev/nbd0
00:05:24.380 17:13:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:24.380 17:13:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:24.380 17:13:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:05:24.380 17:13:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:24.380 17:13:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:24.380 17:13:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:24.380 17:13:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:05:24.380 17:13:42 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:24.380 17:13:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:24.380 17:13:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:24.380 17:13:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:24.380 1+0 records in
00:05:24.380 1+0 records out
00:05:24.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222557 s, 18.4 MB/s
00:05:24.380 17:13:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:24.380 17:13:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:24.380 17:13:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:24.380 17:13:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:24.380 17:13:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:24.380 17:13:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:24.380 17:13:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:24.380 17:13:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:24.637 /dev/nbd1
00:05:24.637 17:13:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:24.637 17:13:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:24.637 1+0 records in
00:05:24.637 1+0 records out
00:05:24.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185031 s, 22.1 MB/s
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:24.637 17:13:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:24.637 17:13:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:24.637 17:13:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:24.637 17:13:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:24.637 17:13:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.637 17:13:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:24.637 17:13:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:24.637 {
00:05:24.637 "nbd_device": "/dev/nbd0",
00:05:24.637 "bdev_name": "Malloc0"
00:05:24.637 },
00:05:24.637 {
00:05:24.637 "nbd_device": "/dev/nbd1",
00:05:24.637 "bdev_name": "Malloc1"
00:05:24.637 }
00:05:24.637 ]'
00:05:24.637 17:13:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:24.637 {
00:05:24.637 "nbd_device": "/dev/nbd0",
00:05:24.637 "bdev_name": "Malloc0"
00:05:24.637 },
00:05:24.637 {
00:05:24.637 "nbd_device": "/dev/nbd1",
00:05:24.637 "bdev_name": "Malloc1"
00:05:24.637 }
00:05:24.637 ]'
00:05:24.637 17:13:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:24.896 /dev/nbd1'
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:24.896 /dev/nbd1'
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:24.896 256+0 records in
00:05:24.896 256+0 records out
00:05:24.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102878 s, 102 MB/s
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:24.896 256+0 records in
00:05:24.896 256+0 records out
00:05:24.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138092 s, 75.9 MB/s
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:24.896 256+0 records in
00:05:24.896 256+0 records out
00:05:24.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143777 s, 72.9 MB/s
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:24.896 17:13:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:25.154 17:13:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:25.412 17:13:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:25.412 17:13:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:25.412 17:13:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:25.412 17:13:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:25.412 17:13:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:25.412 17:13:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:25.412 17:13:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:25.412 17:13:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:25.412 17:13:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:25.412 17:13:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:25.412 17:13:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:25.412 17:13:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:25.412 17:13:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:25.671 17:13:44 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:25.930 [2024-07-12 17:13:44.485804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:25.930 [2024-07-12 17:13:44.552721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:25.930 [2024-07-12 17:13:44.552734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.930 [2024-07-12 17:13:44.593722] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:25.930 [2024-07-12 17:13:44.593764] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:29.215 17:13:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3892436 /var/tmp/spdk-nbd.sock
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3892436 ']'
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:29.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:05:29.215 17:13:47 event.app_repeat -- event/event.sh@39 -- # killprocess 3892436
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3892436 ']'
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3892436
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@953 -- # uname
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3892436
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:29.215 17:13:47 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:29.216 17:13:47 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3892436'
00:05:29.216 killing process with pid 3892436
00:05:29.216 17:13:47 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3892436
00:05:29.216 17:13:47 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3892436
00:05:29.216 spdk_app_start is called in Round 0.
00:05:29.216 Shutdown signal received, stop current app iteration
00:05:29.216 Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 reinitialization...
00:05:29.216 spdk_app_start is called in Round 1.
00:05:29.216 Shutdown signal received, stop current app iteration
00:05:29.216 Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 reinitialization...
00:05:29.216 spdk_app_start is called in Round 2.
00:05:29.216 Shutdown signal received, stop current app iteration
00:05:29.216 Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 reinitialization...
00:05:29.216 spdk_app_start is called in Round 3.
00:05:29.216 Shutdown signal received, stop current app iteration
00:05:29.216 17:13:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:29.216 17:13:47 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:29.216
00:05:29.216 real 0m16.248s
00:05:29.216 user 0m35.233s
00:05:29.216 sys 0m2.293s
00:05:29.216 17:13:47 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:29.216 17:13:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:29.216 ************************************
00:05:29.216 END TEST app_repeat
00:05:29.216 ************************************
00:05:29.216 17:13:47 event -- common/autotest_common.sh@1142 -- # return 0
00:05:29.216 17:13:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:29.216 17:13:47 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:29.216 17:13:47 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:29.216 17:13:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:29.216 17:13:47 event -- common/autotest_common.sh@10 -- # set +x
00:05:29.216 ************************************
00:05:29.216 START TEST cpu_locks
00:05:29.216 ************************************
00:05:29.216 17:13:47 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:29.216 * Looking for test storage...
00:05:29.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:29.216 17:13:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:29.216 17:13:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:29.216 17:13:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:29.216 17:13:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:29.216 17:13:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:29.216 17:13:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:29.216 17:13:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:29.216 ************************************
00:05:29.216 START TEST default_locks
00:05:29.216 ************************************
00:05:29.216 17:13:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks
00:05:29.216 17:13:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:29.216 17:13:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3895419
00:05:29.216 17:13:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3895419
00:05:29.216 17:13:47 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3895419 ']'
00:05:29.216 17:13:47 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:29.216 17:13:47 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:29.216 17:13:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:29.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:29.216 17:13:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:29.216 17:13:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:29.216 [2024-07-12 17:13:47.898819] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:05:29.216 [2024-07-12 17:13:47.898860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895419 ]
00:05:29.216 EAL: No free 2048 kB hugepages reported on node 1
00:05:29.216 [2024-07-12 17:13:47.952076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:29.474 [2024-07-12 17:13:48.025064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.042 17:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:30.042 17:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:05:30.042 17:13:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3895419
00:05:30.042 17:13:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3895419
00:05:30.042 17:13:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:30.301 lslocks: write error
00:05:30.301 17:13:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3895419
00:05:30.301 17:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3895419 ']'
00:05:30.301 17:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3895419
00:05:30.301 17:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:05:30.301 17:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:30.301 17:13:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3895419
00:05:30.301 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:30.301 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:30.301 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3895419'
00:05:30.301 killing process with pid 3895419
00:05:30.301 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3895419
00:05:30.301 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3895419
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3895419
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3895419
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3895419
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3895419 ']'
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:30.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:30.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3895419) - No such process
00:05:30.560 ERROR: process (pid: 3895419) is no longer running
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:30.560
00:05:30.560 real 0m1.476s
00:05:30.560 user 0m1.555s
00:05:30.560 sys 0m0.466s
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:30.560 17:13:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:30.560 ************************************
00:05:30.560 END TEST default_locks
00:05:30.560 ************************************
00:05:30.819 17:13:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:30.819 17:13:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:30.819 17:13:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:30.819 17:13:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:30.819 17:13:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:30.819 ************************************
00:05:30.819 START TEST default_locks_via_rpc
00:05:30.819 ************************************
00:05:30.819 17:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:05:30.819 17:13:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3895685
00:05:30.819 17:13:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3895685
00:05:30.819 17:13:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:30.819 17:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3895685 ']'
00:05:30.819 17:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:30.819 17:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:30.819 17:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:30.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:30.819 17:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:30.819 17:13:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:30.820 [2024-07-12 17:13:49.432840] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:05:30.820 [2024-07-12 17:13:49.432879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895685 ]
00:05:30.820 EAL: No free 2048 kB hugepages reported on node 1
00:05:30.820 [2024-07-12 17:13:49.485084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.820 [2024-07-12 17:13:49.564502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3895685
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3895685
00:05:31.754 17:13:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:32.013 17:13:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3895685
00:05:32.013 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3895685 ']'
00:05:32.013 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3895685
00:05:32.013 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname
00:05:32.013 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:32.013 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3895685
00:05:32.013 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:32.013 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:32.013 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3895685'
00:05:32.013 killing process with pid 3895685
00:05:32.013 17:13:50 event.cpu_locks.default_locks_via_rpc --
common/autotest_common.sh@967 -- # kill 3895685 00:05:32.013 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3895685 00:05:32.273 00:05:32.273 real 0m1.545s 00:05:32.273 user 0m1.627s 00:05:32.273 sys 0m0.501s 00:05:32.273 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.273 17:13:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.273 ************************************ 00:05:32.273 END TEST default_locks_via_rpc 00:05:32.273 ************************************ 00:05:32.273 17:13:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:32.273 17:13:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:32.273 17:13:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.273 17:13:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.273 17:13:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.273 ************************************ 00:05:32.273 START TEST non_locking_app_on_locked_coremask 00:05:32.273 ************************************ 00:05:32.273 17:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:32.273 17:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.273 17:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3895952 00:05:32.273 17:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3895952 /var/tmp/spdk.sock 00:05:32.273 17:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3895952 ']' 
00:05:32.273 17:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.273 17:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.273 17:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.273 17:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.273 17:13:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.273 [2024-07-12 17:13:51.029353] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:32.273 [2024-07-12 17:13:51.029396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895952 ] 00:05:32.532 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.532 [2024-07-12 17:13:51.082179] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.532 [2024-07-12 17:13:51.161053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.099 17:13:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.099 17:13:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:33.099 17:13:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3896120 00:05:33.099 17:13:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # 
waitforlisten 3896120 /var/tmp/spdk2.sock 00:05:33.099 17:13:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:33.099 17:13:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3896120 ']' 00:05:33.099 17:13:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.099 17:13:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.099 17:13:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.099 17:13:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.099 17:13:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.358 [2024-07-12 17:13:51.903451] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:33.358 [2024-07-12 17:13:51.903499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896120 ] 00:05:33.358 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.358 [2024-07-12 17:13:51.979171] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:33.358 [2024-07-12 17:13:51.979197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.358 [2024-07-12 17:13:52.131914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.292 17:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.292 17:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:34.292 17:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3895952 00:05:34.292 17:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3895952 00:05:34.292 17:13:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.550 lslocks: write error 00:05:34.550 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3895952 00:05:34.550 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3895952 ']' 00:05:34.550 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3895952 00:05:34.550 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:34.550 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.550 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3895952 00:05:34.550 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.550 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.550 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3895952' 00:05:34.550 killing process with pid 3895952 00:05:34.550 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3895952 00:05:34.550 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3895952 00:05:35.486 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3896120 00:05:35.486 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3896120 ']' 00:05:35.486 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3896120 00:05:35.486 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:35.486 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.486 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3896120 00:05:35.486 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.486 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.486 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3896120' 00:05:35.486 killing process with pid 3896120 00:05:35.486 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3896120 00:05:35.486 17:13:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3896120 00:05:35.746 00:05:35.746 real 0m3.288s 00:05:35.746 user 0m3.549s 00:05:35.746 sys 0m0.925s 00:05:35.746 17:13:54 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.746 17:13:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.746 ************************************ 00:05:35.746 END TEST non_locking_app_on_locked_coremask 00:05:35.746 ************************************ 00:05:35.746 17:13:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:35.746 17:13:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:35.746 17:13:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.746 17:13:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.746 17:13:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.746 ************************************ 00:05:35.746 START TEST locking_app_on_unlocked_coremask 00:05:35.746 ************************************ 00:05:35.746 17:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:35.746 17:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3896462 00:05:35.746 17:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3896462 /var/tmp/spdk.sock 00:05:35.746 17:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:35.746 17:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3896462 ']' 00:05:35.746 17:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.746 17:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.746 17:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.746 17:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.746 17:13:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.746 [2024-07-12 17:13:54.398203] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:35.746 [2024-07-12 17:13:54.398249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896462 ] 00:05:35.746 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.746 [2024-07-12 17:13:54.451079] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:35.746 [2024-07-12 17:13:54.451103] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.005 [2024-07-12 17:13:54.524587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.573 17:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.573 17:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:36.573 17:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3896684 00:05:36.573 17:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3896684 /var/tmp/spdk2.sock 00:05:36.573 17:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:36.573 17:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3896684 ']' 00:05:36.573 17:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.573 17:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.573 17:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.573 17:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.573 17:13:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.573 [2024-07-12 17:13:55.239922] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:05:36.573 [2024-07-12 17:13:55.239972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896684 ] 00:05:36.573 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.573 [2024-07-12 17:13:55.314587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.831 [2024-07-12 17:13:55.460998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.398 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.398 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:37.398 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3896684 00:05:37.398 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3896684 00:05:37.398 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.965 lslocks: write error 00:05:37.965 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3896462 00:05:37.965 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3896462 ']' 00:05:37.965 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3896462 00:05:37.965 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:37.965 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.965 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3896462 00:05:37.965 17:13:56 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.965 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.965 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3896462' 00:05:37.965 killing process with pid 3896462 00:05:37.965 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3896462 00:05:38.223 17:13:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3896462 00:05:38.789 17:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3896684 00:05:38.789 17:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3896684 ']' 00:05:38.789 17:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3896684 00:05:38.789 17:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:38.789 17:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.789 17:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3896684 00:05:38.789 17:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.789 17:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.789 17:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3896684' 00:05:38.789 killing process with pid 3896684 00:05:38.789 17:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 3896684 00:05:38.789 17:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3896684 00:05:39.057 00:05:39.057 real 0m3.370s 00:05:39.057 user 0m3.604s 00:05:39.057 sys 0m0.960s 00:05:39.057 17:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.057 17:13:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.057 ************************************ 00:05:39.057 END TEST locking_app_on_unlocked_coremask 00:05:39.057 ************************************ 00:05:39.057 17:13:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:39.057 17:13:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:39.057 17:13:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.057 17:13:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.057 17:13:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.057 ************************************ 00:05:39.057 START TEST locking_app_on_locked_coremask 00:05:39.057 ************************************ 00:05:39.057 17:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:39.057 17:13:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.057 17:13:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3897177 00:05:39.057 17:13:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3897177 /var/tmp/spdk.sock 00:05:39.057 17:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 
3897177 ']' 00:05:39.057 17:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.057 17:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.057 17:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.057 17:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.057 17:13:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.057 [2024-07-12 17:13:57.808312] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:39.057 [2024-07-12 17:13:57.808350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897177 ] 00:05:39.393 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.393 [2024-07-12 17:13:57.862598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.393 [2024-07-12 17:13:57.941426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.961 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.961 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:39.961 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3897267 00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3897267 
/var/tmp/spdk2.sock 00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3897267 /var/tmp/spdk2.sock 00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3897267 /var/tmp/spdk2.sock 00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3897267 ']' 00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:39.962 17:13:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:39.962 [2024-07-12 17:13:58.663291] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:05:39.962 [2024-07-12 17:13:58.663336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897267 ]
00:05:39.962 EAL: No free 2048 kB hugepages reported on node 1
00:05:39.962 [2024-07-12 17:13:58.739292] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3897177 has claimed it.
00:05:39.962 [2024-07-12 17:13:58.739326] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:40.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3897267) - No such process
00:05:40.530 ERROR: process (pid: 3897267) is no longer running
00:05:40.530 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:40.530 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1
00:05:40.530 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:05:40.530 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:40.530 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:40.530 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:40.530 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3897177
00:05:40.530 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3897177
00:05:40.530 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:41.098 lslocks: write error
00:05:41.098 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3897177
00:05:41.098 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3897177 ']'
00:05:41.098 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3897177
00:05:41.098 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:41.098 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:41.098 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3897177
00:05:41.098 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:41.098 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:41.098 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3897177'
00:05:41.098 killing process with pid 3897177
00:05:41.098 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3897177
00:05:41.098 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3897177
00:05:41.357
00:05:41.357 real 0m2.186s
00:05:41.357 user 0m2.437s
00:05:41.357 sys 0m0.549s
00:05:41.357 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:41.357 17:13:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:41.357 ************************************
00:05:41.357 END TEST locking_app_on_locked_coremask
00:05:41.357 ************************************
00:05:41.357 17:13:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:41.357 17:13:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:41.357 17:13:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:41.357 17:13:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:41.357 17:13:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:41.357 ************************************
00:05:41.357 START TEST locking_overlapped_coremask
00:05:41.357 ************************************
00:05:41.357 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask
00:05:41.357 17:14:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3897542
00:05:41.357 17:14:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3897542 /var/tmp/spdk.sock
00:05:41.357 17:14:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:41.357 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3897542 ']'
00:05:41.357 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:41.357 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:41.358 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:41.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:41.358 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:41.358 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:41.358 [2024-07-12 17:14:00.068011] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:05:41.358 [2024-07-12 17:14:00.068055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897542 ]
00:05:41.358 EAL: No free 2048 kB hugepages reported on node 1
00:05:41.358 [2024-07-12 17:14:00.122060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:41.616 [2024-07-12 17:14:00.204190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:41.616 [2024-07-12 17:14:00.204285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.616 [2024-07-12 17:14:00.204285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3897685
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3897685 /var/tmp/spdk2.sock
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3897685 /var/tmp/spdk2.sock
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3897685 /var/tmp/spdk2.sock
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3897685 ']'
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:42.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:42.182 17:14:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:42.183 [2024-07-12 17:14:00.923615] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:05:42.183 [2024-07-12 17:14:00.923662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897685 ]
00:05:42.183 EAL: No free 2048 kB hugepages reported on node 1
00:05:42.441 [2024-07-12 17:14:00.999653] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3897542 has claimed it.
00:05:42.441 [2024-07-12 17:14:00.999690] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:43.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3897685) - No such process
00:05:43.008 ERROR: process (pid: 3897685) is no longer running
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3897542
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3897542 ']'
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3897542
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3897542
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3897542'
00:05:43.008 killing process with pid 3897542
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 3897542
00:05:43.008 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3897542
00:05:43.266
00:05:43.266 real 0m1.893s
00:05:43.266 user 0m5.330s
00:05:43.266 sys 0m0.407s
00:05:43.266 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:43.266 17:14:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:43.266 ************************************
00:05:43.266 END TEST locking_overlapped_coremask
00:05:43.266 ************************************
00:05:43.266 17:14:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:43.266 17:14:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:43.266 17:14:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:43.266 17:14:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:43.266 17:14:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:43.266 ************************************
00:05:43.266 START TEST locking_overlapped_coremask_via_rpc
00:05:43.266 ************************************
00:05:43.266 17:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc
00:05:43.266 17:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3897946
00:05:43.266 17:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3897946 /var/tmp/spdk.sock
00:05:43.266 17:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:43.266 17:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3897946 ']'
00:05:43.266 17:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:43.266 17:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:43.266 17:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:43.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:43.266 17:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:43.266 17:14:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:43.266 [2024-07-12 17:14:02.010887] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:05:43.266 [2024-07-12 17:14:02.010927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897946 ]
00:05:43.266 EAL: No free 2048 kB hugepages reported on node 1
00:05:43.525 [2024-07-12 17:14:02.067910] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:43.525 [2024-07-12 17:14:02.067932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:43.525 [2024-07-12 17:14:02.149029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:43.525 [2024-07-12 17:14:02.149040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:43.525 [2024-07-12 17:14:02.149042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.100 17:14:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:44.100 17:14:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:44.100 17:14:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3898061
00:05:44.100 17:14:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3898061 /var/tmp/spdk2.sock
00:05:44.100 17:14:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:05:44.100 17:14:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3898061 ']'
00:05:44.100 17:14:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:44.100 17:14:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:44.101 17:14:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:44.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:44.101 17:14:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:44.101 17:14:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:44.101 [2024-07-12 17:14:02.855039] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:05:44.101 [2024-07-12 17:14:02.855092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898061 ]
00:05:44.101 EAL: No free 2048 kB hugepages reported on node 1
00:05:44.358 [2024-07-12 17:14:02.935965] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:44.358 [2024-07-12 17:14:02.935991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:44.358 [2024-07-12 17:14:03.090266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:44.358 [2024-07-12 17:14:03.090381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:44.358 [2024-07-12 17:14:03.090390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:44.925 [2024-07-12 17:14:03.673452] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3897946 has claimed it.
00:05:44.925 request:
00:05:44.925 {
00:05:44.925 "method": "framework_enable_cpumask_locks",
00:05:44.925 "req_id": 1
00:05:44.925 }
00:05:44.925 Got JSON-RPC error response
00:05:44.925 response:
00:05:44.925 {
00:05:44.925 "code": -32603,
00:05:44.925 "message": "Failed to claim CPU core: 2"
00:05:44.925 }
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3897946 /var/tmp/spdk.sock
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3897946 ']'
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:44.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:44.925 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:45.184 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:45.184 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:45.184 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3898061 /var/tmp/spdk2.sock
00:05:45.184 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3898061 ']'
00:05:45.184 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:45.184 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:45.184 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:45.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:45.184 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:45.184 17:14:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:45.446 17:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:45.446 17:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:45.446 17:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:05:45.446 17:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:45.446 17:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:45.446 17:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:45.446
00:05:45.446 real 0m2.094s
00:05:45.446 user 0m0.863s
00:05:45.446 sys 0m0.158s
00:05:45.446 17:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:45.446 17:14:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:45.446 ************************************
00:05:45.446 END TEST locking_overlapped_coremask_via_rpc
00:05:45.446 ************************************
00:05:45.446 17:14:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:45.446 17:14:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:05:45.446 17:14:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3897946 ]]
00:05:45.446 17:14:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3897946
00:05:45.446 17:14:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3897946 ']'
00:05:45.446 17:14:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3897946
00:05:45.446 17:14:04 event.cpu_locks -- common/autotest_common.sh@953 -- # uname
00:05:45.446 17:14:04 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:45.446 17:14:04 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3897946
00:05:45.446 17:14:04 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:45.446 17:14:04 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:45.446 17:14:04 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3897946'
00:05:45.446 killing process with pid 3897946
00:05:45.446 17:14:04 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3897946
00:05:45.446 17:14:04 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3897946
00:05:45.705 17:14:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3898061 ]]
00:05:45.705 17:14:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3898061
00:05:45.705 17:14:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3898061 ']'
00:05:45.705 17:14:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3898061
00:05:45.705 17:14:04 event.cpu_locks -- common/autotest_common.sh@953 -- # uname
00:05:45.705 17:14:04 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:45.705 17:14:04 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3898061
00:05:45.964 17:14:04 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:05:45.964 17:14:04 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:05:45.964 17:14:04 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3898061'
00:05:45.964 killing process with pid 3898061
17:14:04 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3898061
17:14:04 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3898061
00:05:46.225 17:14:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:46.225 17:14:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:05:46.225 17:14:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3897946 ]]
00:05:46.225 17:14:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3897946
00:05:46.225 17:14:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3897946 ']'
00:05:46.225 17:14:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3897946
00:05:46.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3897946) - No such process
00:05:46.225 17:14:04 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3897946 is not found'
00:05:46.225 Process with pid 3897946 is not found
00:05:46.225 17:14:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3898061 ]]
00:05:46.225 17:14:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3898061
00:05:46.225 17:14:04 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3898061 ']'
00:05:46.225 17:14:04 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3898061
00:05:46.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3898061) - No such process
00:05:46.225 17:14:04 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3898061 is not found'
00:05:46.225 Process with pid 3898061 is not found
00:05:46.225 17:14:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:05:46.225
00:05:46.225 real 0m17.083s
00:05:46.225 user 0m29.419s
00:05:46.225 sys 0m4.840s
00:05:46.225 17:14:04 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:46.225 17:14:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:46.225 ************************************
00:05:46.225 END TEST cpu_locks
00:05:46.225 ************************************
00:05:46.225 17:14:04 event -- common/autotest_common.sh@1142 -- # return 0
00:05:46.225
00:05:46.225 real 0m42.187s
00:05:46.225 user 1m20.513s
00:05:46.225 sys 0m8.017s
00:05:46.225 17:14:04 event -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:46.225 17:14:04 event -- common/autotest_common.sh@10 -- # set +x
00:05:46.225 ************************************
00:05:46.225 END TEST event
00:05:46.225 ************************************
00:05:46.225 17:14:04 -- common/autotest_common.sh@1142 -- # return 0
00:05:46.225 17:14:04 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:05:46.225 17:14:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:46.225 17:14:04 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:46.225 17:14:04 -- common/autotest_common.sh@10 -- # set +x
00:05:46.225 ************************************
00:05:46.225 START TEST thread
00:05:46.225 ************************************
00:05:46.225 17:14:04 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:05:46.225 * Looking for test storage...
00:05:46.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread
00:05:46.484 17:14:05 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:46.484 17:14:05 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:05:46.484 17:14:05 thread -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:46.484 17:14:05 thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.484 ************************************
00:05:46.484 START TEST thread_poller_perf
00:05:46.484 ************************************
00:05:46.484 17:14:05 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:46.484 [2024-07-12 17:14:05.059023] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:05:46.484 [2024-07-12 17:14:05.059090] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898513 ]
00:05:46.484 EAL: No free 2048 kB hugepages reported on node 1
00:05:46.484 [2024-07-12 17:14:05.115845] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:46.484 [2024-07-12 17:14:05.189073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:46.484 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:05:47.860 ======================================
00:05:47.860 busy:2307320978 (cyc)
00:05:47.860 total_run_count: 408000
00:05:47.860 tsc_hz: 2300000000 (cyc)
00:05:47.860 ======================================
00:05:47.860 poller_cost: 5655 (cyc), 2458 (nsec)
00:05:47.860
00:05:47.860 real 0m1.225s
00:05:47.860 user 0m1.145s
00:05:47.860 sys 0m0.076s
00:05:47.860 17:14:06 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:47.860 17:14:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:47.860 ************************************
00:05:47.860 END TEST thread_poller_perf
00:05:47.860 ************************************
00:05:47.860 17:14:06 thread -- common/autotest_common.sh@1142 -- # return 0
00:05:47.860 17:14:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:47.860 17:14:06 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:05:47.860 17:14:06 thread -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:47.860 17:14:06 thread -- common/autotest_common.sh@10 -- # set +x
00:05:47.860 ************************************
00:05:47.860 START TEST thread_poller_perf
00:05:47.860 ************************************
00:05:47.860 17:14:06 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:47.860 [2024-07-12 17:14:06.340387] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:05:47.860 [2024-07-12 17:14:06.340427] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898766 ] 00:05:47.860 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.860 [2024-07-12 17:14:06.393335] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.860 [2024-07-12 17:14:06.462985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.860 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:48.796 ====================================== 00:05:48.796 busy:2301711424 (cyc) 00:05:48.796 total_run_count: 5306000 00:05:48.796 tsc_hz: 2300000000 (cyc) 00:05:48.796 ====================================== 00:05:48.796 poller_cost: 433 (cyc), 188 (nsec) 00:05:48.796 00:05:48.796 real 0m1.203s 00:05:48.796 user 0m1.138s 00:05:48.796 sys 0m0.061s 00:05:48.796 17:14:07 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.796 17:14:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.796 ************************************ 00:05:48.796 END TEST thread_poller_perf 00:05:48.796 ************************************ 00:05:48.796 17:14:07 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:48.796 17:14:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:48.796 00:05:48.796 real 0m2.636s 00:05:48.796 user 0m2.374s 00:05:48.796 sys 0m0.270s 00:05:48.796 17:14:07 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.796 17:14:07 thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.796 ************************************ 00:05:48.796 END TEST thread 00:05:48.796 ************************************ 00:05:49.054 17:14:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:49.054 17:14:07 -- spdk/autotest.sh@183 -- # run_test 
accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:49.054 17:14:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.054 17:14:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.054 17:14:07 -- common/autotest_common.sh@10 -- # set +x 00:05:49.054 ************************************ 00:05:49.054 START TEST accel 00:05:49.054 ************************************ 00:05:49.054 17:14:07 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:49.054 * Looking for test storage... 00:05:49.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:49.054 17:14:07 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:49.054 17:14:07 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:49.054 17:14:07 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:49.054 17:14:07 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3899051 00:05:49.054 17:14:07 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:49.054 17:14:07 accel -- accel/accel.sh@63 -- # waitforlisten 3899051 00:05:49.054 17:14:07 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:49.054 17:14:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.054 17:14:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.054 17:14:07 accel -- common/autotest_common.sh@829 -- # '[' -z 3899051 ']' 00:05:49.054 17:14:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.054 17:14:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.054 17:14:07 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.054 17:14:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.054 17:14:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:49.054 17:14:07 accel -- accel/accel.sh@41 -- # jq -r . 
00:05:49.054 17:14:07 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.054 17:14:07 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.054 17:14:07 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.054 17:14:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.054 [2024-07-12 17:14:07.739160] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:49.054 [2024-07-12 17:14:07.739207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899051 ] 00:05:49.054 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.054 [2024-07-12 17:14:07.788311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.312 [2024-07-12 17:14:07.868172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@862 -- # return 0 00:05:49.878 17:14:08 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:49.878 17:14:08 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:49.878 17:14:08 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:49.878 17:14:08 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:49.878 17:14:08 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:49.878 17:14:08 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.878 17:14:08 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # 
IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # IFS== 00:05:49.878 17:14:08 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:49.878 17:14:08 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:49.878 17:14:08 accel -- accel/accel.sh@75 -- # killprocess 3899051 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@948 -- # '[' -z 3899051 ']' 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@952 -- # kill -0 3899051 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@953 -- # uname 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3899051 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3899051' 00:05:49.878 killing process with pid 3899051 00:05:49.878 17:14:08 accel -- common/autotest_common.sh@967 -- # kill 3899051 00:05:49.878 
17:14:08 accel -- common/autotest_common.sh@972 -- # wait 3899051 00:05:50.445 17:14:08 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:50.445 17:14:08 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:50.445 17:14:08 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:50.445 17:14:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.445 17:14:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.445 17:14:08 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:50.445 17:14:08 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:50.445 17:14:08 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:50.445 17:14:08 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.445 17:14:08 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.445 17:14:08 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.445 17:14:08 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.445 17:14:08 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.445 17:14:08 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:50.445 17:14:08 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:50.445 17:14:08 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.445 17:14:08 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:50.445 17:14:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.445 17:14:09 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:50.445 17:14:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:50.445 17:14:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.445 17:14:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.445 ************************************ 00:05:50.445 START TEST accel_missing_filename 00:05:50.445 ************************************ 00:05:50.445 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:50.445 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:50.445 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:50.445 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:50.445 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.445 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:50.445 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.445 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:50.445 17:14:09 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:50.445 17:14:09 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:50.445 17:14:09 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.445 17:14:09 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.445 17:14:09 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.445 17:14:09 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.445 17:14:09 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.445 17:14:09 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:50.445 17:14:09 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:50.445 [2024-07-12 17:14:09.088658] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:50.445 [2024-07-12 17:14:09.088724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899326 ] 00:05:50.445 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.445 [2024-07-12 17:14:09.143807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.445 [2024-07-12 17:14:09.215327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.705 [2024-07-12 17:14:09.256187] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:50.705 [2024-07-12 17:14:09.315785] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:50.705 A filename is required. 
00:05:50.705 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:50.705 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:50.705 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:50.705 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:50.705 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:50.705 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:50.705 00:05:50.705 real 0m0.326s 00:05:50.705 user 0m0.246s 00:05:50.705 sys 0m0.118s 00:05:50.705 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.705 17:14:09 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:50.705 ************************************ 00:05:50.705 END TEST accel_missing_filename 00:05:50.705 ************************************ 00:05:50.705 17:14:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.705 17:14:09 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:50.705 17:14:09 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:50.705 17:14:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.705 17:14:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.705 ************************************ 00:05:50.705 START TEST accel_compress_verify 00:05:50.705 ************************************ 00:05:50.705 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:50.705 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:50.705 17:14:09 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:50.705 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:50.705 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.705 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:50.705 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.705 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:50.705 17:14:09 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:50.705 17:14:09 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:50.705 17:14:09 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.705 17:14:09 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.705 17:14:09 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.705 17:14:09 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.705 17:14:09 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.705 17:14:09 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:50.705 17:14:09 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:50.705 [2024-07-12 17:14:09.479029] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:05:50.705 [2024-07-12 17:14:09.479092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899348 ] 00:05:50.964 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.964 [2024-07-12 17:14:09.537341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.964 [2024-07-12 17:14:09.609967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.964 [2024-07-12 17:14:09.650948] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:50.964 [2024-07-12 17:14:09.711067] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:51.224 00:05:51.224 Compression does not support the verify option, aborting. 00:05:51.224 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:51.224 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.224 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:51.224 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:51.224 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:51.224 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.224 00:05:51.224 real 0m0.333s 00:05:51.224 user 0m0.250s 00:05:51.224 sys 0m0.118s 00:05:51.224 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.224 17:14:09 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:51.224 ************************************ 00:05:51.224 END TEST accel_compress_verify 00:05:51.224 ************************************ 00:05:51.224 17:14:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.224 17:14:09 accel -- 
accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:51.224 17:14:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:51.224 17:14:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.224 17:14:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.224 ************************************ 00:05:51.224 START TEST accel_wrong_workload 00:05:51.224 ************************************ 00:05:51.224 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:51.224 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:51.224 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:51.224 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:51.224 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.224 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:51.224 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.224 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:51.224 17:14:09 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:51.224 17:14:09 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:51.224 17:14:09 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.224 17:14:09 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.224 17:14:09 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.224 17:14:09 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:05:51.224 17:14:09 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.224 17:14:09 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:51.224 17:14:09 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:51.224 Unsupported workload type: foobar 00:05:51.224 [2024-07-12 17:14:09.861456] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:51.224 accel_perf options: 00:05:51.224 [-h help message] 00:05:51.224 [-q queue depth per core] 00:05:51.225 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:51.225 [-T number of threads per core 00:05:51.225 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:51.225 [-t time in seconds] 00:05:51.225 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:51.225 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:51.225 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:51.225 [-l for compress/decompress workloads, name of uncompressed input file 00:05:51.225 [-S for crc32c workload, use this seed value (default 0) 00:05:51.225 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:51.225 [-f for fill workload, use this BYTE value (default 255) 00:05:51.225 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:51.225 [-y verify result if this switch is on] 00:05:51.225 [-a tasks to allocate per core (default: same value as -q)] 00:05:51.225 Can be used to spread operations across a wider range of memory. 
00:05:51.225 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:51.225 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.225 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.225 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.225 00:05:51.225 real 0m0.022s 00:05:51.225 user 0m0.014s 00:05:51.225 sys 0m0.008s 00:05:51.225 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.225 17:14:09 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:51.225 ************************************ 00:05:51.225 END TEST accel_wrong_workload 00:05:51.225 ************************************ 00:05:51.225 Error: writing output failed: Broken pipe 00:05:51.225 17:14:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.225 17:14:09 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:51.225 17:14:09 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:51.225 17:14:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.225 17:14:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.225 ************************************ 00:05:51.225 START TEST accel_negative_buffers 00:05:51.225 ************************************ 00:05:51.225 17:14:09 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:51.225 17:14:09 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:51.225 17:14:09 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:51.225 17:14:09 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:51.225 17:14:09 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.225 17:14:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:51.225 17:14:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.225 17:14:09 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:51.225 17:14:09 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:51.225 17:14:09 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:51.225 17:14:09 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.225 17:14:09 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.225 17:14:09 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.225 17:14:09 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.225 17:14:09 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.225 17:14:09 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:51.225 17:14:09 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:51.225 -x option must be non-negative. 00:05:51.225 [2024-07-12 17:14:09.957641] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:51.225 accel_perf options: 00:05:51.225 [-h help message] 00:05:51.225 [-q queue depth per core] 00:05:51.225 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:51.225 [-T number of threads per core 00:05:51.225 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:51.225 [-t time in seconds] 00:05:51.225 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:51.225 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:51.225 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:51.225 [-l for compress/decompress workloads, name of uncompressed input file 00:05:51.225 [-S for crc32c workload, use this seed value (default 0) 00:05:51.225 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:51.225 [-f for fill workload, use this BYTE value (default 255) 00:05:51.225 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:51.225 [-y verify result if this switch is on] 00:05:51.225 [-a tasks to allocate per core (default: same value as -q)] 00:05:51.225 Can be used to spread operations across a wider range of memory. 
00:05:51.225 17:14:09 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:51.225 17:14:09 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.225 17:14:09 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.225 17:14:09 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.225 00:05:51.225 real 0m0.033s 00:05:51.225 user 0m0.019s 00:05:51.225 sys 0m0.013s 00:05:51.225 17:14:09 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.225 17:14:09 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:51.225 ************************************ 00:05:51.225 END TEST accel_negative_buffers 00:05:51.225 ************************************ 00:05:51.225 Error: writing output failed: Broken pipe 00:05:51.225 17:14:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.225 17:14:09 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:51.225 17:14:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:51.225 17:14:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.225 17:14:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.484 ************************************ 00:05:51.484 START TEST accel_crc32c 00:05:51.484 ************************************ 00:05:51.484 17:14:10 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 
00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:51.484 [2024-07-12 17:14:10.051022] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:51.484 [2024-07-12 17:14:10.051072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899622 ] 00:05:51.484 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.484 [2024-07-12 17:14:10.105776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.484 [2024-07-12 17:14:10.177314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.484 
17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.484 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.485 17:14:10 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.485 17:14:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.859 17:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.859 17:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.859 17:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.859 17:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.859 17:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.859 17:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.859 17:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.859 17:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:52.860 17:14:11 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.860 00:05:52.860 real 0m1.332s 00:05:52.860 user 0m1.232s 00:05:52.860 sys 0m0.113s 00:05:52.860 17:14:11 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.860 17:14:11 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:52.860 ************************************ 00:05:52.860 END TEST accel_crc32c 00:05:52.860 ************************************ 00:05:52.860 17:14:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.860 17:14:11 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:52.860 17:14:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:52.860 17:14:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.860 17:14:11 accel -- common/autotest_common.sh@10 -- # set +x 
00:05:52.860 ************************************ 00:05:52.860 START TEST accel_crc32c_C2 00:05:52.860 ************************************ 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:52.860 [2024-07-12 17:14:11.439127] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:05:52.860 [2024-07-12 17:14:11.439173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899876 ] 00:05:52.860 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.860 [2024-07-12 17:14:11.491262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.860 [2024-07-12 17:14:11.563642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.860 17:14:11 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.860 17:14:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.240 
17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.240 00:05:54.240 real 0m1.323s 00:05:54.240 user 0m1.223s 00:05:54.240 sys 0m0.112s 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.240 17:14:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:54.240 ************************************ 00:05:54.240 END TEST accel_crc32c_C2 00:05:54.240 ************************************ 00:05:54.240 17:14:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.240 17:14:12 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:54.240 17:14:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:54.240 17:14:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.240 17:14:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.240 ************************************ 00:05:54.240 START TEST accel_copy 00:05:54.240 ************************************ 00:05:54.240 17:14:12 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 
00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:54.240 [2024-07-12 17:14:12.819530] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:54.240 [2024-07-12 17:14:12.819576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900124 ] 00:05:54.240 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.240 [2024-07-12 17:14:12.873210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.240 [2024-07-12 17:14:12.944589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.240 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:54.241 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.241 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.241 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.241 17:14:12 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:05:54.241 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.241 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.241 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:54.241 17:14:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:54.241 17:14:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:54.241 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:54.241 17:14:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:55.621 17:14:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.621 00:05:55.621 real 0m1.331s 00:05:55.621 user 0m1.234s 00:05:55.621 sys 0m0.110s 00:05:55.621 17:14:14 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.621 17:14:14 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:55.621 ************************************ 00:05:55.621 END TEST accel_copy 00:05:55.621 ************************************ 00:05:55.621 17:14:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.621 17:14:14 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:55.621 17:14:14 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:55.621 17:14:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.621 17:14:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.621 ************************************ 00:05:55.621 START TEST accel_fill 00:05:55.621 ************************************ 00:05:55.621 17:14:14 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:55.621 [2024-07-12 17:14:14.206369] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:05:55.621 [2024-07-12 17:14:14.206417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900375 ] 00:05:55.621 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.621 [2024-07-12 17:14:14.258363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.621 [2024-07-12 17:14:14.330407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:55.621 17:14:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@20 
-- # val= 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:56.998 17:14:15 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:56.998 00:05:56.998 real 0m1.321s 00:05:56.998 user 0m1.223s 00:05:56.998 sys 0m0.111s 00:05:56.998 17:14:15 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.998 17:14:15 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:56.998 ************************************ 00:05:56.998 END TEST accel_fill 00:05:56.998 ************************************ 00:05:56.998 17:14:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.998 17:14:15 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:56.998 17:14:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:56.998 17:14:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.998 17:14:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.998 ************************************ 00:05:56.998 START TEST accel_copy_crc32c 00:05:56.998 ************************************ 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@32 -- 
# [[ 0 -gt 0 ]] 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:56.998 [2024-07-12 17:14:15.597899] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:56.998 [2024-07-12 17:14:15.597941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900624 ] 00:05:56.998 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.998 [2024-07-12 17:14:15.646046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.998 [2024-07-12 17:14:15.717963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 
bytes' 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- 
# read -r var val 00:05:56.998 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:56.999 17:14:15 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.373 00:05:58.373 real 0m1.318s 00:05:58.373 user 0m1.223s 00:05:58.373 sys 0m0.108s 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.373 17:14:16 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:58.373 ************************************ 00:05:58.373 END TEST accel_copy_crc32c 
00:05:58.373 ************************************ 00:05:58.373 17:14:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.373 17:14:16 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:58.373 17:14:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:58.374 17:14:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.374 17:14:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.374 ************************************ 00:05:58.374 START TEST accel_copy_crc32c_C2 00:05:58.374 ************************************ 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.374 17:14:16 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:58.374 17:14:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:58.374 [2024-07-12 17:14:16.991165] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:58.374 [2024-07-12 17:14:16.991232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900870 ] 00:05:58.374 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.374 [2024-07-12 17:14:17.046816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.374 [2024-07-12 17:14:17.119111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.632 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:58.632 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.632 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.632 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.633 
17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:58.633 17:14:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.569 17:14:18 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.569 00:05:59.569 real 0m1.336s 00:05:59.569 user 0m1.236s 00:05:59.569 sys 0m0.113s 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.569 17:14:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:59.569 ************************************ 00:05:59.569 
END TEST accel_copy_crc32c_C2 00:05:59.569 ************************************ 00:05:59.569 17:14:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.569 17:14:18 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:59.569 17:14:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:59.569 17:14:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.569 17:14:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.828 ************************************ 00:05:59.828 START TEST accel_dualcast 00:05:59.828 ************************************ 00:05:59.828 17:14:18 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:59.828 17:14:18 accel.accel_dualcast -- 
accel/accel.sh@41 -- # jq -r . 00:05:59.828 [2024-07-12 17:14:18.390709] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:05:59.828 [2024-07-12 17:14:18.390763] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901123 ] 00:05:59.828 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.828 [2024-07-12 17:14:18.445798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.828 [2024-07-12 17:14:18.516943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 
00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:59.828 17:14:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:59.828 17:14:18 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:01.272 17:14:19 accel.accel_dualcast -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:01.272 17:14:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.272 00:06:01.272 real 0m1.331s 00:06:01.272 user 0m1.229s 00:06:01.272 sys 0m0.115s 00:06:01.272 17:14:19 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.272 17:14:19 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:01.272 ************************************ 00:06:01.272 END TEST accel_dualcast 00:06:01.272 ************************************ 00:06:01.272 17:14:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.272 17:14:19 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:01.272 17:14:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:01.272 17:14:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.272 17:14:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.272 ************************************ 00:06:01.272 START TEST accel_compare 00:06:01.272 ************************************ 00:06:01.272 17:14:19 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:01.272 [2024-07-12 17:14:19.789351] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:06:01.272 [2024-07-12 17:14:19.789425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901371 ] 00:06:01.272 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.272 [2024-07-12 17:14:19.845000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.272 [2024-07-12 17:14:19.916510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.272 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 
00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val=software 
00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:01.273 17:14:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 
accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:02.654 17:14:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.654 00:06:02.654 real 0m1.337s 00:06:02.654 user 0m1.236s 00:06:02.654 sys 0m0.114s 00:06:02.654 17:14:21 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.654 17:14:21 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:02.654 ************************************ 00:06:02.654 END TEST accel_compare 00:06:02.654 ************************************ 00:06:02.654 17:14:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.654 17:14:21 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:02.654 17:14:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:02.654 17:14:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.654 17:14:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.654 ************************************ 00:06:02.654 START TEST accel_xor 00:06:02.654 ************************************ 00:06:02.654 17:14:21 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:02.654 17:14:21 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:02.654 [2024-07-12 17:14:21.177527] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:06:02.654 [2024-07-12 17:14:21.177566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901622 ] 00:06:02.654 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.654 [2024-07-12 17:14:21.230817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.654 [2024-07-12 17:14:21.302397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 
17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.654 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.655 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.655 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:02.655 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.655 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.655 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.655 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.655 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.655 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.655 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:02.655 17:14:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:02.655 17:14:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:02.655 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:02.655 17:14:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.031 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.032 00:06:04.032 real 0m1.319s 00:06:04.032 user 0m1.225s 00:06:04.032 sys 
0m0.108s 00:06:04.032 17:14:22 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.032 17:14:22 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:04.032 ************************************ 00:06:04.032 END TEST accel_xor 00:06:04.032 ************************************ 00:06:04.032 17:14:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.032 17:14:22 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:04.032 17:14:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:04.032 17:14:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.032 17:14:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.032 ************************************ 00:06:04.032 START TEST accel_xor 00:06:04.032 ************************************ 00:06:04.032 17:14:22 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.032 17:14:22 accel.accel_xor -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:04.032 [2024-07-12 17:14:22.562798] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:06:04.032 [2024-07-12 17:14:22.562834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901877 ] 00:06:04.032 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.032 [2024-07-12 17:14:22.615142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.032 [2024-07-12 17:14:22.686531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:04.032 17:14:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.424 17:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.425 17:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:05.425 17:14:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:23 accel.accel_xor -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.425 17:14:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:05.425 17:14:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.425 00:06:05.425 real 0m1.318s 00:06:05.425 user 0m1.223s 00:06:05.425 sys 0m0.108s 00:06:05.425 17:14:23 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.425 17:14:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:05.425 ************************************ 00:06:05.425 END TEST accel_xor 00:06:05.425 ************************************ 00:06:05.425 17:14:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.425 17:14:23 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:05.425 17:14:23 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:05.425 17:14:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.425 17:14:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.425 ************************************ 00:06:05.425 START TEST accel_dif_verify 00:06:05.425 ************************************ 00:06:05.425 17:14:23 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
build_accel_config 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:05.425 17:14:23 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:05.425 [2024-07-12 17:14:23.946643] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:06:05.425 [2024-07-12 17:14:23.946680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902123 ] 00:06:05.425 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.425 [2024-07-12 17:14:23.999799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.425 [2024-07-12 17:14:24.071388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify 
-- accel/accel.sh@20 -- # val=0x1 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify 
-- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:05.425 17:14:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.802 17:14:25 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:06.802 17:14:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]] 00:06:06.802 00:06:06.802 real 0m1.320s 00:06:06.802 user 0m1.228s 00:06:06.802 sys 0m0.108s 00:06:06.802 17:14:25 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.802 17:14:25 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:06.802 ************************************ 00:06:06.802 END TEST accel_dif_verify 00:06:06.802 ************************************ 00:06:06.802 17:14:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.802 17:14:25 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:06.802 17:14:25 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:06.802 17:14:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.802 17:14:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.802 ************************************ 00:06:06.802 START TEST accel_dif_generate 00:06:06.802 ************************************ 00:06:06.802 17:14:25 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.802 17:14:25 
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:06.802 [2024-07-12 17:14:25.345723] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:06:06.802 [2024-07-12 17:14:25.345774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902378 ] 00:06:06.802 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.802 [2024-07-12 17:14:25.401771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.802 [2024-07-12 17:14:25.474236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.802 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:06.803 17:14:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.180 17:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.180 17:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:08.180 17:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.180 17:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.180 17:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:06:08.181 17:14:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.181 00:06:08.181 real 0m1.335s 00:06:08.181 user 0m1.236s 00:06:08.181 sys 0m0.114s 00:06:08.181 17:14:26 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.181 17:14:26 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:08.181 ************************************ 00:06:08.181 END TEST accel_dif_generate 00:06:08.181 ************************************ 00:06:08.181 17:14:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.181 17:14:26 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:08.181 17:14:26 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:08.181 17:14:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.181 17:14:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.181 ************************************ 00:06:08.181 START TEST accel_dif_generate_copy 00:06:08.181 ************************************ 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:08.181 17:14:26 
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:08.181 [2024-07-12 17:14:26.739383] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:06:08.181 [2024-07-12 17:14:26.739432] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902624 ] 00:06:08.181 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.181 [2024-07-12 17:14:26.793892] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.181 [2024-07-12 17:14:26.865842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.181 17:14:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.558 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.559 17:14:28 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.559 00:06:09.559 real 0m1.328s 00:06:09.559 user 0m1.232s 00:06:09.559 sys 0m0.110s 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.559 17:14:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:09.559 ************************************ 00:06:09.559 END TEST accel_dif_generate_copy 00:06:09.559 ************************************ 00:06:09.559 17:14:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.559 17:14:28 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:09.559 17:14:28 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:09.559 17:14:28 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:09.559 17:14:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.559 17:14:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.559 ************************************ 00:06:09.559 START TEST accel_comp 00:06:09.559 ************************************ 00:06:09.559 17:14:28 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:09.559 17:14:28 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:09.559 [2024-07-12 17:14:28.137136] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:06:09.559 [2024-07-12 17:14:28.137182] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902871 ] 00:06:09.559 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.559 [2024-07-12 17:14:28.191403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.559 [2024-07-12 17:14:28.262675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:09.559 17:14:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.937 17:14:29 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.937 17:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.938 17:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:10.938 17:14:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.938 17:14:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:10.938 17:14:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.938 00:06:10.938 real 0m1.334s 00:06:10.938 user 0m1.233s 00:06:10.938 sys 0m0.114s 00:06:10.938 17:14:29 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.938 17:14:29 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:10.938 ************************************ 00:06:10.938 END TEST accel_comp 00:06:10.938 ************************************ 00:06:10.938 17:14:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.938 17:14:29 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.938 17:14:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:10.938 17:14:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.938 17:14:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.938 ************************************ 00:06:10.938 START TEST accel_decomp 00:06:10.938 ************************************ 00:06:10.938 17:14:29 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:06:10.938 [2024-07-12 17:14:29.519920] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:06:10.938 [2024-07-12 17:14:29.519956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903122 ] 00:06:10.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.938 [2024-07-12 17:14:29.573409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.938 [2024-07-12 17:14:29.645759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 
17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:10.938 17:14:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:12.316 17:14:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.316 00:06:12.316 real 0m1.322s 00:06:12.316 user 0m1.232s 00:06:12.316 sys 0m0.103s 00:06:12.316 17:14:30 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.316 17:14:30 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:12.316 ************************************ 00:06:12.316 END TEST accel_decomp 00:06:12.316 ************************************ 00:06:12.316 17:14:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.316 17:14:30 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:12.316 17:14:30 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:12.316 17:14:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.316 17:14:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.316 ************************************ 00:06:12.316 START TEST accel_decomp_full 00:06:12.316 ************************************ 00:06:12.316 17:14:30 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:12.316 
17:14:30 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:12.316 17:14:30 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:12.317 17:14:30 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:12.317 17:14:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:30 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:12.317 17:14:30 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:12.317 17:14:30 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.317 17:14:30 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.317 17:14:30 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.317 17:14:30 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.317 17:14:30 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.317 17:14:30 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:12.317 17:14:30 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:12.317 [2024-07-12 17:14:30.904350] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:06:12.317 [2024-07-12 17:14:30.904403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903372 ] 00:06:12.317 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.317 [2024-07-12 17:14:30.957618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.317 [2024-07-12 17:14:31.029095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.317 17:14:31 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:12.317 17:14:31 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:12.317 17:14:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:13.694 17:14:32 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.694 00:06:13.694 real 0m1.333s 00:06:13.694 user 0m1.237s 00:06:13.694 sys 0m0.111s 00:06:13.694 17:14:32 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.694 17:14:32 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:13.694 ************************************ 00:06:13.694 END TEST accel_decomp_full 00:06:13.694 ************************************ 00:06:13.694 17:14:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.694 17:14:32 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:13.694 17:14:32 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:13.694 17:14:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.694 17:14:32 accel 
-- common/autotest_common.sh@10 -- # set +x 00:06:13.694 ************************************ 00:06:13.694 START TEST accel_decomp_mcore 00:06:13.694 ************************************ 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:13.694 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:13.694 [2024-07-12 17:14:32.312429] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:06:13.694 [2024-07-12 17:14:32.312478] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903617 ] 00:06:13.694 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.694 [2024-07-12 17:14:32.367944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:13.694 [2024-07-12 17:14:32.444271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.694 [2024-07-12 17:14:32.444367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.694 [2024-07-12 17:14:32.444455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.694 [2024-07-12 17:14:32.444458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:13.954 17:14:32 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.954 17:14:32 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.954 17:14:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.890 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.890 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.890 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.891 17:14:33 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:14.891 
17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.891 00:06:14.891 real 0m1.348s 00:06:14.891 user 0m4.559s 00:06:14.891 sys 0m0.128s 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.891 17:14:33 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:14.891 ************************************ 00:06:14.891 END TEST accel_decomp_mcore 00:06:14.891 ************************************ 00:06:14.891 17:14:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.891 17:14:33 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:14.891 17:14:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:14.891 17:14:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.891 17:14:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.150 ************************************ 00:06:15.150 START TEST accel_decomp_full_mcore 00:06:15.150 ************************************ 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # 
local accel_module 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:15.150 [2024-07-12 17:14:33.715567] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:06:15.150 [2024-07-12 17:14:33.715615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903877 ] 00:06:15.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.150 [2024-07-12 17:14:33.769180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.150 [2024-07-12 17:14:33.843278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.150 [2024-07-12 17:14:33.843372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.150 [2024-07-12 17:14:33.843480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.150 [2024-07-12 17:14:33.843482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.150 17:14:33 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:15.150 17:14:33 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.150 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:15.151 17:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.528 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.528 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.528 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.528 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 
17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.529 00:06:16.529 real 0m1.347s 00:06:16.529 user 0m4.605s 00:06:16.529 sys 0m0.113s 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.529 17:14:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:16.529 ************************************ 00:06:16.529 END TEST accel_decomp_full_mcore 00:06:16.529 ************************************ 00:06:16.529 17:14:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.529 17:14:35 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:16.529 17:14:35 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:16.529 17:14:35 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:16.529 17:14:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.529 ************************************ 00:06:16.529 START TEST accel_decomp_mthread 00:06:16.529 ************************************ 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:06:16.529 [2024-07-12 17:14:35.120221] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:06:16.529 [2024-07-12 17:14:35.120258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904128 ] 00:06:16.529 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.529 [2024-07-12 17:14:35.173095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.529 [2024-07-12 17:14:35.244387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:16.529 
17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.529 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.530 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.530 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.530 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.530 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.530 17:14:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.907 00:06:17.907 real 0m1.321s 00:06:17.907 user 0m1.231s 00:06:17.907 sys 0m0.106s 00:06:17.907 17:14:36 
accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.907 17:14:36 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:17.907 ************************************ 00:06:17.907 END TEST accel_decomp_mthread 00:06:17.907 ************************************ 00:06:17.907 17:14:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.907 17:14:36 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:17.907 17:14:36 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:17.907 17:14:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.907 17:14:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.907 ************************************ 00:06:17.907 START TEST accel_decomp_full_mthread 00:06:17.907 ************************************ 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:17.907 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:17.907 [2024-07-12 17:14:36.509504] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:06:17.907 [2024-07-12 17:14:36.509555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904376 ] 00:06:17.908 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.908 [2024-07-12 17:14:36.563251] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.908 [2024-07-12 17:14:36.634877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.908 
17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # 
accel_opc=decompress 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.908 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.167 17:14:36 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:18.167 17:14:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.104 17:14:37 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.104 00:06:19.104 real 0m1.362s 00:06:19.104 user 0m1.261s 00:06:19.104 sys 0m0.114s 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.104 17:14:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:19.104 ************************************ 00:06:19.104 END TEST accel_decomp_full_mthread 00:06:19.104 ************************************ 00:06:19.104 17:14:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.104 17:14:37 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:19.104 17:14:37 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:19.104 17:14:37 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:19.104 17:14:37 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:19.104 17:14:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.104 17:14:37 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.104 17:14:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.104 17:14:37 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.104 17:14:37 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.104 17:14:37 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.104 17:14:37 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.363 17:14:37 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:19.363 17:14:37 accel -- accel/accel.sh@41 -- # jq -r . 00:06:19.363 ************************************ 00:06:19.363 START TEST accel_dif_functional_tests 00:06:19.363 ************************************ 00:06:19.363 17:14:37 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:19.363 [2024-07-12 17:14:37.935755] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:06:19.363 [2024-07-12 17:14:37.935790] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904628 ] 00:06:19.363 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.363 [2024-07-12 17:14:37.988236] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.363 [2024-07-12 17:14:38.061675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.363 [2024-07-12 17:14:38.061771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.363 [2024-07-12 17:14:38.061772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.363 00:06:19.363 00:06:19.363 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.363 http://cunit.sourceforge.net/ 00:06:19.363 00:06:19.364 00:06:19.364 Suite: accel_dif 00:06:19.364 Test: verify: DIF generated, GUARD check ...passed 00:06:19.364 Test: verify: DIF generated, APPTAG check ...passed 00:06:19.364 Test: verify: DIF generated, REFTAG check ...passed 00:06:19.364 Test: verify: DIF not generated, GUARD check ...[2024-07-12 17:14:38.130198] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:19.364 passed 00:06:19.364 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 17:14:38.130244] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:19.364 passed 00:06:19.364 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 17:14:38.130280] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:19.364 passed 00:06:19.364 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:19.364 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 17:14:38.130321] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=30, Expected=28, Actual=14 00:06:19.364 passed 00:06:19.364 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:19.364 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:19.364 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:19.364 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 17:14:38.130422] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:19.364 passed 00:06:19.364 Test: verify copy: DIF generated, GUARD check ...passed 00:06:19.364 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:19.364 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:19.364 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 17:14:38.130529] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:19.364 passed 00:06:19.364 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 17:14:38.130551] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:19.364 passed 00:06:19.364 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 17:14:38.130570] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:19.364 passed 00:06:19.364 Test: generate copy: DIF generated, GUARD check ...passed 00:06:19.364 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:19.364 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:19.364 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:19.364 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:19.364 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:19.364 Test: generate copy: iovecs-len validate ...[2024-07-12 17:14:38.130736] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:19.364 passed 00:06:19.364 Test: generate copy: buffer alignment validate ...passed 00:06:19.364 00:06:19.364 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.364 suites 1 1 n/a 0 0 00:06:19.364 tests 26 26 26 0 0 00:06:19.364 asserts 115 115 115 0 n/a 00:06:19.364 00:06:19.364 Elapsed time = 0.002 seconds 00:06:19.623 00:06:19.623 real 0m0.391s 00:06:19.623 user 0m0.613s 00:06:19.623 sys 0m0.133s 00:06:19.623 17:14:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.623 17:14:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:19.623 ************************************ 00:06:19.623 END TEST accel_dif_functional_tests 00:06:19.623 ************************************ 00:06:19.623 17:14:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.623 00:06:19.623 real 0m30.707s 00:06:19.623 user 0m34.705s 00:06:19.623 sys 0m4.063s 00:06:19.623 17:14:38 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.623 17:14:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.623 ************************************ 00:06:19.623 END TEST accel 00:06:19.623 ************************************ 00:06:19.623 17:14:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.623 17:14:38 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:19.623 17:14:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.623 17:14:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.623 17:14:38 -- common/autotest_common.sh@10 -- # set +x 00:06:19.882 ************************************ 00:06:19.882 START TEST accel_rpc 00:06:19.882 ************************************ 00:06:19.882 17:14:38 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:19.882 * Looking for test storage... 
00:06:19.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:19.882 17:14:38 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:19.883 17:14:38 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3904756 00:06:19.883 17:14:38 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3904756 00:06:19.883 17:14:38 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:19.883 17:14:38 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3904756 ']' 00:06:19.883 17:14:38 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.883 17:14:38 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.883 17:14:38 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.883 17:14:38 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.883 17:14:38 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.883 [2024-07-12 17:14:38.536275] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:06:19.883 [2024-07-12 17:14:38.536327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904756 ] 00:06:19.883 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.883 [2024-07-12 17:14:38.590123] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.141 [2024-07-12 17:14:38.671082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.709 17:14:39 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.709 17:14:39 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:20.709 17:14:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:20.709 17:14:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:20.709 17:14:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:20.709 17:14:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:20.709 17:14:39 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:20.709 17:14:39 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.709 17:14:39 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.709 17:14:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.709 ************************************ 00:06:20.709 START TEST accel_assign_opcode 00:06:20.709 ************************************ 00:06:20.709 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:20.709 17:14:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:20.709 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.709 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:06:20.709 [2024-07-12 17:14:39.361138] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:20.709 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.709 17:14:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:20.709 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.709 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:20.709 [2024-07-12 17:14:39.369148] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:20.709 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.709 17:14:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:20.709 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.709 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:20.967 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.968 17:14:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:20.968 17:14:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:20.968 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.968 17:14:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:20.968 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:20.968 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.968 software 00:06:20.968 00:06:20.968 real 0m0.231s 00:06:20.968 user 0m0.045s 00:06:20.968 sys 0m0.009s 00:06:20.968 17:14:39 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.968 17:14:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:20.968 ************************************ 00:06:20.968 END TEST accel_assign_opcode 00:06:20.968 ************************************ 00:06:20.968 17:14:39 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:20.968 17:14:39 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3904756 00:06:20.968 17:14:39 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3904756 ']' 00:06:20.968 17:14:39 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3904756 00:06:20.968 17:14:39 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:20.968 17:14:39 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.968 17:14:39 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3904756 00:06:20.968 17:14:39 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.968 17:14:39 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.968 17:14:39 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3904756' 00:06:20.968 killing process with pid 3904756 00:06:20.968 17:14:39 accel_rpc -- common/autotest_common.sh@967 -- # kill 3904756 00:06:20.968 17:14:39 accel_rpc -- common/autotest_common.sh@972 -- # wait 3904756 00:06:21.225 00:06:21.225 real 0m1.572s 00:06:21.225 user 0m1.637s 00:06:21.225 sys 0m0.420s 00:06:21.225 17:14:39 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.225 17:14:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.225 ************************************ 00:06:21.225 END TEST accel_rpc 00:06:21.225 ************************************ 00:06:21.484 17:14:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:21.484 17:14:40 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.484 17:14:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.484 17:14:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.484 17:14:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.484 ************************************ 00:06:21.484 START TEST app_cmdline 00:06:21.484 ************************************ 00:06:21.484 17:14:40 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.484 * Looking for test storage... 00:06:21.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:21.484 17:14:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:21.484 17:14:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3905216 00:06:21.484 17:14:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3905216 00:06:21.484 17:14:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:21.484 17:14:40 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3905216 ']' 00:06:21.484 17:14:40 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.484 17:14:40 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.484 17:14:40 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.484 17:14:40 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.484 17:14:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.484 [2024-07-12 17:14:40.176188] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:06:21.484 [2024-07-12 17:14:40.176239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905216 ] 00:06:21.484 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.484 [2024-07-12 17:14:40.228923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.743 [2024-07-12 17:14:40.309726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.310 17:14:40 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.310 17:14:40 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:22.310 17:14:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:22.568 { 00:06:22.568 "version": "SPDK v24.09-pre git sha1 a0b7842f9", 00:06:22.568 "fields": { 00:06:22.568 "major": 24, 00:06:22.568 "minor": 9, 00:06:22.568 "patch": 0, 00:06:22.568 "suffix": "-pre", 00:06:22.568 "commit": "a0b7842f9" 00:06:22.568 } 00:06:22.568 } 00:06:22.568 17:14:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:22.568 17:14:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:22.568 17:14:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:22.568 17:14:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:22.568 17:14:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.568 17:14:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:22.568 17:14:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:22.568 17:14:41 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.568 17:14:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:22.568 17:14:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:22.568 17:14:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.568 request: 00:06:22.568 { 00:06:22.568 "method": "env_dpdk_get_mem_stats", 00:06:22.568 "req_id": 1 
00:06:22.568 } 00:06:22.568 Got JSON-RPC error response 00:06:22.568 response: 00:06:22.568 { 00:06:22.568 "code": -32601, 00:06:22.568 "message": "Method not found" 00:06:22.568 } 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.568 17:14:41 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.568 17:14:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3905216 00:06:22.827 17:14:41 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3905216 ']' 00:06:22.827 17:14:41 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3905216 00:06:22.827 17:14:41 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:22.827 17:14:41 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.827 17:14:41 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3905216 00:06:22.827 17:14:41 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.827 17:14:41 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.827 17:14:41 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3905216' 00:06:22.827 killing process with pid 3905216 00:06:22.827 17:14:41 app_cmdline -- common/autotest_common.sh@967 -- # kill 3905216 00:06:22.827 17:14:41 app_cmdline -- common/autotest_common.sh@972 -- # wait 3905216 00:06:23.097 00:06:23.097 real 0m1.653s 00:06:23.097 user 0m1.967s 00:06:23.097 sys 0m0.418s 00:06:23.097 17:14:41 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.097 17:14:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.097 ************************************ 00:06:23.097 END TEST app_cmdline 00:06:23.097 ************************************ 00:06:23.097 17:14:41 -- 
common/autotest_common.sh@1142 -- # return 0 00:06:23.097 17:14:41 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.097 17:14:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.097 17:14:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.097 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:06:23.097 ************************************ 00:06:23.097 START TEST version 00:06:23.097 ************************************ 00:06:23.097 17:14:41 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.097 * Looking for test storage... 00:06:23.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:23.097 17:14:41 version -- app/version.sh@17 -- # get_header_version major 00:06:23.097 17:14:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.097 17:14:41 version -- app/version.sh@14 -- # cut -f2 00:06:23.097 17:14:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.097 17:14:41 version -- app/version.sh@17 -- # major=24 00:06:23.097 17:14:41 version -- app/version.sh@18 -- # get_header_version minor 00:06:23.097 17:14:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.097 17:14:41 version -- app/version.sh@14 -- # cut -f2 00:06:23.097 17:14:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.097 17:14:41 version -- app/version.sh@18 -- # minor=9 00:06:23.097 17:14:41 version -- app/version.sh@19 -- # get_header_version patch 00:06:23.097 17:14:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.097 
17:14:41 version -- app/version.sh@14 -- # cut -f2 00:06:23.097 17:14:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.097 17:14:41 version -- app/version.sh@19 -- # patch=0 00:06:23.097 17:14:41 version -- app/version.sh@20 -- # get_header_version suffix 00:06:23.097 17:14:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.097 17:14:41 version -- app/version.sh@14 -- # cut -f2 00:06:23.097 17:14:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.393 17:14:41 version -- app/version.sh@20 -- # suffix=-pre 00:06:23.393 17:14:41 version -- app/version.sh@22 -- # version=24.9 00:06:23.393 17:14:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:23.393 17:14:41 version -- app/version.sh@28 -- # version=24.9rc0 00:06:23.393 17:14:41 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:23.393 17:14:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:23.393 17:14:41 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:23.393 17:14:41 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:23.393 00:06:23.393 real 0m0.148s 00:06:23.393 user 0m0.071s 00:06:23.393 sys 0m0.111s 00:06:23.393 17:14:41 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.393 17:14:41 version -- common/autotest_common.sh@10 -- # set +x 00:06:23.393 ************************************ 00:06:23.393 END TEST version 00:06:23.393 ************************************ 00:06:23.393 17:14:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:23.393 17:14:41 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:06:23.393 17:14:41 -- spdk/autotest.sh@198 -- # uname -s 00:06:23.393 17:14:41 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:23.393 17:14:41 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:23.393 17:14:41 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:23.393 17:14:41 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:23.393 17:14:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:23.393 17:14:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:23.393 17:14:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:23.393 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:06:23.393 17:14:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:23.393 17:14:41 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:23.393 17:14:41 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:23.393 17:14:41 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:23.393 17:14:41 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:23.393 17:14:41 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:23.393 17:14:41 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.393 17:14:41 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:23.393 17:14:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.393 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:06:23.393 ************************************ 00:06:23.393 START TEST nvmf_tcp 00:06:23.393 ************************************ 00:06:23.393 17:14:42 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.393 * Looking for test storage... 00:06:23.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.393 17:14:42 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.393 17:14:42 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.393 17:14:42 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.393 17:14:42 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.393 17:14:42 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.393 17:14:42 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.393 17:14:42 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:23.393 17:14:42 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.393 17:14:42 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:23.393 17:14:42 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:23.393 17:14:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:23.393 17:14:42 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:23.393 17:14:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:23.393 17:14:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.393 17:14:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.678 ************************************ 00:06:23.678 START TEST nvmf_example 00:06:23.678 ************************************ 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:23.678 * Looking for test storage... 
00:06:23.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.678 17:14:42 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.678 17:14:42 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:23.678 17:14:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:23.679 17:14:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:23.679 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:23.679 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.679 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:23.679 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:23.679 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:23.679 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.679 17:14:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:23.679 17:14:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.679 
17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:23.679 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:23.679 17:14:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:23.679 17:14:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:28.946 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:28.946 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:28.947 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:28.947 Found net devices under 0000:86:00.0: cvl_0_0 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:28.947 Found net devices under 0000:86:00.1: cvl_0_1 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:28.947 17:14:47 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:28.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:28.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:06:28.947 00:06:28.947 --- 10.0.0.2 ping statistics --- 00:06:28.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.947 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:28.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:28.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:06:28.947 00:06:28.947 --- 10.0.0.1 ping statistics --- 00:06:28.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.947 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.947 17:14:47 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3908613 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3908613 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3908613 ']' 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.947 17:14:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.948 17:14:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:28.948 17:14:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.948 17:14:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:28.948 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.880 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.880 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:29.880 17:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:29.880 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:29.880 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:29.880 17:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:29.880 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.880 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:29.880 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:29.881 17:14:48 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:29.881 17:14:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:29.881 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.085 Initializing NVMe Controllers 00:06:42.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:42.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:42.085 Initialization complete. Launching workers. 
00:06:42.085 ======================================================== 00:06:42.085 Latency(us) 00:06:42.085 Device Information : IOPS MiB/s Average min max 00:06:42.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18081.05 70.63 3539.29 706.64 16249.02 00:06:42.085 ======================================================== 00:06:42.085 Total : 18081.05 70.63 3539.29 706.64 16249.02 00:06:42.085 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:42.085 rmmod nvme_tcp 00:06:42.085 rmmod nvme_fabrics 00:06:42.085 rmmod nvme_keyring 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3908613 ']' 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3908613 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3908613 ']' 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3908613 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3908613 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3908613' 00:06:42.085 killing process with pid 3908613 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3908613 00:06:42.085 17:14:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3908613 00:06:42.085 nvmf threads initialize successfully 00:06:42.085 bdev subsystem init successfully 00:06:42.085 created a nvmf target service 00:06:42.085 create targets's poll groups done 00:06:42.085 all subsystems of target started 00:06:42.085 nvmf target is running 00:06:42.085 all subsystems of target stopped 00:06:42.085 destroy targets's poll groups done 00:06:42.085 destroyed the nvmf target service 00:06:42.085 bdev subsystem finish successfully 00:06:42.085 nvmf threads destroy successfully 00:06:42.085 17:14:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:42.085 17:14:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:42.085 17:14:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:42.085 17:14:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:42.085 17:14:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:42.085 17:14:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.085 17:14:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.085 17:14:59 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.655 17:15:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:42.655 17:15:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:42.655 17:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:42.655 17:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.655 00:06:42.655 real 0m19.067s 00:06:42.655 user 0m45.997s 00:06:42.655 sys 0m5.468s 00:06:42.655 17:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.655 17:15:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:42.655 ************************************ 00:06:42.655 END TEST nvmf_example 00:06:42.655 ************************************ 00:06:42.655 17:15:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:42.655 17:15:01 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:42.655 17:15:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:42.655 17:15:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.655 17:15:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.655 ************************************ 00:06:42.655 START TEST nvmf_filesystem 00:06:42.655 ************************************ 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:42.655 * Looking for test storage... 
00:06:42.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:42.655 17:15:01 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:42.655 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:42.656 #define SPDK_CONFIG_H 00:06:42.656 
#define SPDK_CONFIG_APPS 1 00:06:42.656 #define SPDK_CONFIG_ARCH native 00:06:42.656 #undef SPDK_CONFIG_ASAN 00:06:42.656 #undef SPDK_CONFIG_AVAHI 00:06:42.656 #undef SPDK_CONFIG_CET 00:06:42.656 #define SPDK_CONFIG_COVERAGE 1 00:06:42.656 #define SPDK_CONFIG_CROSS_PREFIX 00:06:42.656 #undef SPDK_CONFIG_CRYPTO 00:06:42.656 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:42.656 #undef SPDK_CONFIG_CUSTOMOCF 00:06:42.656 #undef SPDK_CONFIG_DAOS 00:06:42.656 #define SPDK_CONFIG_DAOS_DIR 00:06:42.656 #define SPDK_CONFIG_DEBUG 1 00:06:42.656 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:42.656 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:42.656 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:42.656 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:42.656 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:42.656 #undef SPDK_CONFIG_DPDK_UADK 00:06:42.656 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:42.656 #define SPDK_CONFIG_EXAMPLES 1 00:06:42.656 #undef SPDK_CONFIG_FC 00:06:42.656 #define SPDK_CONFIG_FC_PATH 00:06:42.656 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:42.656 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:42.656 #undef SPDK_CONFIG_FUSE 00:06:42.656 #undef SPDK_CONFIG_FUZZER 00:06:42.656 #define SPDK_CONFIG_FUZZER_LIB 00:06:42.656 #undef SPDK_CONFIG_GOLANG 00:06:42.656 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:42.656 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:42.656 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:42.656 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:42.656 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:42.656 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:42.656 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:42.656 #define SPDK_CONFIG_IDXD 1 00:06:42.656 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:42.656 #undef SPDK_CONFIG_IPSEC_MB 00:06:42.656 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:42.656 #define SPDK_CONFIG_ISAL 1 00:06:42.656 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:42.656 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:06:42.656 #define SPDK_CONFIG_LIBDIR 00:06:42.656 #undef SPDK_CONFIG_LTO 00:06:42.656 #define SPDK_CONFIG_MAX_LCORES 128 00:06:42.656 #define SPDK_CONFIG_NVME_CUSE 1 00:06:42.656 #undef SPDK_CONFIG_OCF 00:06:42.656 #define SPDK_CONFIG_OCF_PATH 00:06:42.656 #define SPDK_CONFIG_OPENSSL_PATH 00:06:42.656 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:42.656 #define SPDK_CONFIG_PGO_DIR 00:06:42.656 #undef SPDK_CONFIG_PGO_USE 00:06:42.656 #define SPDK_CONFIG_PREFIX /usr/local 00:06:42.656 #undef SPDK_CONFIG_RAID5F 00:06:42.656 #undef SPDK_CONFIG_RBD 00:06:42.656 #define SPDK_CONFIG_RDMA 1 00:06:42.656 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:42.656 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:42.656 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:42.656 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:42.656 #define SPDK_CONFIG_SHARED 1 00:06:42.656 #undef SPDK_CONFIG_SMA 00:06:42.656 #define SPDK_CONFIG_TESTS 1 00:06:42.656 #undef SPDK_CONFIG_TSAN 00:06:42.656 #define SPDK_CONFIG_UBLK 1 00:06:42.656 #define SPDK_CONFIG_UBSAN 1 00:06:42.656 #undef SPDK_CONFIG_UNIT_TESTS 00:06:42.656 #undef SPDK_CONFIG_URING 00:06:42.656 #define SPDK_CONFIG_URING_PATH 00:06:42.656 #undef SPDK_CONFIG_URING_ZNS 00:06:42.656 #undef SPDK_CONFIG_USDT 00:06:42.656 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:42.656 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:42.656 #define SPDK_CONFIG_VFIO_USER 1 00:06:42.656 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:42.656 #define SPDK_CONFIG_VHOST 1 00:06:42.656 #define SPDK_CONFIG_VIRTIO 1 00:06:42.656 #undef SPDK_CONFIG_VTUNE 00:06:42.656 #define SPDK_CONFIG_VTUNE_DIR 00:06:42.656 #define SPDK_CONFIG_WERROR 1 00:06:42.656 #define SPDK_CONFIG_WPDK_DIR 00:06:42.656 #undef SPDK_CONFIG_XNVME 00:06:42.656 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.656 17:15:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.657 17:15:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.657 17:15:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.657 17:15:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.657 17:15:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:42.657 17:15:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.657 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:42.657 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:42.918 17:15:01 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:42.918 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:42.919 17:15:01 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:42.919 17:15:01 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:42.919 17:15:01 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:42.919 17:15:01 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:42.919 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:42.920 17:15:01 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:42.920 17:15:01 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3911104 ]] 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3911104 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.qwJmvx 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.qwJmvx/tests/target /tmp/spdk.qwJmvx 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:06:42.920 17:15:01 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=189731176448 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974303744 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6243127296 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97931513856 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=9375744 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986695168 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987153920 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=458752 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:42.920 * Looking for test storage... 
00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=189731176448 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8457719808 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.920 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:42.921 17:15:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:48.192 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:48.192 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:48.192 Found net devices under 0000:86:00.0: cvl_0_0 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.192 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:48.193 Found net devices under 0000:86:00.1: cvl_0_1 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:48.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:06:48.193 00:06:48.193 --- 10.0.0.2 ping statistics --- 00:06:48.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.193 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:48.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:06:48.193 00:06:48.193 --- 10.0.0.1 ping statistics --- 00:06:48.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.193 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.193 ************************************ 00:06:48.193 START TEST nvmf_filesystem_no_in_capsule 00:06:48.193 ************************************ 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3914173 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3914173 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3914173 ']' 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.193 17:15:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:48.193 [2024-07-12 17:15:06.908341] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:06:48.193 [2024-07-12 17:15:06.908392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.193 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.193 [2024-07-12 17:15:06.965972] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.452 [2024-07-12 17:15:07.048575] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.452 [2024-07-12 17:15:07.048611] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.452 [2024-07-12 17:15:07.048618] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.452 [2024-07-12 17:15:07.048625] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.452 [2024-07-12 17:15:07.048630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:48.452 [2024-07-12 17:15:07.048672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.452 [2024-07-12 17:15:07.048766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.452 [2024-07-12 17:15:07.048852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.452 [2024-07-12 17:15:07.048853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.019 [2024-07-12 17:15:07.766359] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.019 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.279 Malloc1 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:49.279 17:15:07 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.279 [2024-07-12 17:15:07.910340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:49.279 { 00:06:49.279 "name": "Malloc1", 00:06:49.279 "aliases": [ 00:06:49.279 "cd08a886-b6bc-4448-b118-41627e222854" 00:06:49.279 ], 00:06:49.279 "product_name": "Malloc disk", 
00:06:49.279 "block_size": 512, 00:06:49.279 "num_blocks": 1048576, 00:06:49.279 "uuid": "cd08a886-b6bc-4448-b118-41627e222854", 00:06:49.279 "assigned_rate_limits": { 00:06:49.279 "rw_ios_per_sec": 0, 00:06:49.279 "rw_mbytes_per_sec": 0, 00:06:49.279 "r_mbytes_per_sec": 0, 00:06:49.279 "w_mbytes_per_sec": 0 00:06:49.279 }, 00:06:49.279 "claimed": true, 00:06:49.279 "claim_type": "exclusive_write", 00:06:49.279 "zoned": false, 00:06:49.279 "supported_io_types": { 00:06:49.279 "read": true, 00:06:49.279 "write": true, 00:06:49.279 "unmap": true, 00:06:49.279 "flush": true, 00:06:49.279 "reset": true, 00:06:49.279 "nvme_admin": false, 00:06:49.279 "nvme_io": false, 00:06:49.279 "nvme_io_md": false, 00:06:49.279 "write_zeroes": true, 00:06:49.279 "zcopy": true, 00:06:49.279 "get_zone_info": false, 00:06:49.279 "zone_management": false, 00:06:49.279 "zone_append": false, 00:06:49.279 "compare": false, 00:06:49.279 "compare_and_write": false, 00:06:49.279 "abort": true, 00:06:49.279 "seek_hole": false, 00:06:49.279 "seek_data": false, 00:06:49.279 "copy": true, 00:06:49.279 "nvme_iov_md": false 00:06:49.279 }, 00:06:49.279 "memory_domains": [ 00:06:49.279 { 00:06:49.279 "dma_device_id": "system", 00:06:49.279 "dma_device_type": 1 00:06:49.279 }, 00:06:49.279 { 00:06:49.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.279 "dma_device_type": 2 00:06:49.279 } 00:06:49.279 ], 00:06:49.279 "driver_specific": {} 00:06:49.279 } 00:06:49.279 ]' 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:49.279 17:15:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:49.279 17:15:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:49.279 
17:15:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:49.279 17:15:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:49.279 17:15:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:49.279 17:15:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:50.655 17:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:50.655 17:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:50.655 17:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:50.655 17:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:50.655 17:15:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:52.557 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:52.816 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:53.075 17:15:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:54.451 17:15:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.451 ************************************ 00:06:54.451 START TEST filesystem_ext4 00:06:54.451 ************************************ 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 
00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:54.451 17:15:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:54.451 mke2fs 1.46.5 (30-Dec-2021) 00:06:54.451 Discarding device blocks: 0/522240 done 00:06:54.451 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:54.451 Filesystem UUID: 315e1f4e-7818-47d0-a08b-8940ec81a261 00:06:54.451 Superblock backups stored on blocks: 00:06:54.451 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:54.451 00:06:54.451 Allocating group tables: 0/64 done 00:06:54.451 Writing inode tables: 0/64 done 00:06:56.981 Creating journal (8192 blocks): done 00:06:56.981 Writing superblocks and filesystem accounting information: 0/64 done 00:06:56.981 00:06:56.981 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:56.981 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3914173 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:57.238 00:06:57.238 real 0m3.064s 00:06:57.238 user 0m0.026s 00:06:57.238 sys 0m0.067s 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:57.238 ************************************ 00:06:57.238 END TEST filesystem_ext4 00:06:57.238 ************************************ 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.238 17:15:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:57.238 ************************************ 00:06:57.238 START TEST filesystem_btrfs 00:06:57.238 ************************************ 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:57.238 17:15:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:57.238 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:57.238 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:57.238 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:57.238 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:57.496 btrfs-progs v6.6.2 00:06:57.496 See https://btrfs.readthedocs.io for more 
information. 00:06:57.496 00:06:57.496 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:57.496 NOTE: several default settings have changed in version 5.15, please make sure 00:06:57.496 this does not affect your deployments: 00:06:57.496 - DUP for metadata (-m dup) 00:06:57.496 - enabled no-holes (-O no-holes) 00:06:57.496 - enabled free-space-tree (-R free-space-tree) 00:06:57.496 00:06:57.496 Label: (null) 00:06:57.496 UUID: af463597-ed6c-4ce8-bc8a-94f51f7612e6 00:06:57.496 Node size: 16384 00:06:57.496 Sector size: 4096 00:06:57.496 Filesystem size: 510.00MiB 00:06:57.496 Block group profiles: 00:06:57.496 Data: single 8.00MiB 00:06:57.496 Metadata: DUP 32.00MiB 00:06:57.496 System: DUP 8.00MiB 00:06:57.496 SSD detected: yes 00:06:57.496 Zoned device: no 00:06:57.496 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:57.496 Runtime features: free-space-tree 00:06:57.496 Checksum: crc32c 00:06:57.496 Number of devices: 1 00:06:57.496 Devices: 00:06:57.496 ID SIZE PATH 00:06:57.496 1 510.00MiB /dev/nvme0n1p1 00:06:57.496 00:06:57.496 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:57.496 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:57.753 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:57.753 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:57.753 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:57.753 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:57.753 17:15:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:57.753 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:57.753 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3914173 00:06:57.753 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:57.753 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:57.753 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:57.753 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:57.753 00:06:57.753 real 0m0.486s 00:06:57.753 user 0m0.028s 00:06:57.753 sys 0m0.117s 00:06:57.753 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.754 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:57.754 ************************************ 00:06:57.754 END TEST filesystem_btrfs 00:06:57.754 ************************************ 00:06:57.754 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:57.754 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:57.754 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:57.754 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.754 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.012 ************************************ 00:06:58.012 START TEST filesystem_xfs 00:06:58.012 ************************************ 00:06:58.012 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:58.012 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:58.012 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:58.012 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:58.012 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:58.012 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:58.012 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:58.012 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:58.012 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:58.012 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:58.012 17:15:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:58.012 meta-data=/dev/nvme0n1p1 isize=512 
agcount=4, agsize=32640 blks 00:06:58.012 = sectsz=512 attr=2, projid32bit=1 00:06:58.012 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:58.012 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:58.012 data = bsize=4096 blocks=130560, imaxpct=25 00:06:58.012 = sunit=0 swidth=0 blks 00:06:58.012 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:58.012 log =internal log bsize=4096 blocks=16384, version=2 00:06:58.012 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:58.012 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:58.578 Discarding blocks...Done. 00:06:58.578 17:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:58.578 17:15:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:01.157 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:01.157 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:01.157 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:01.157 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:01.157 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:01.157 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:01.157 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3914173 00:07:01.157 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:01.157 17:15:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:01.157 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:01.157 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:01.157 00:07:01.157 real 0m3.363s 00:07:01.157 user 0m0.028s 00:07:01.157 sys 0m0.066s 00:07:01.157 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.157 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:01.157 ************************************ 00:07:01.157 END TEST filesystem_xfs 00:07:01.157 ************************************ 00:07:01.416 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:01.416 17:15:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:01.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:01.416 17:15:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3914173 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3914173 ']' 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3914173 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.416 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 3914173 00:07:01.674 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.674 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.674 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3914173' 00:07:01.674 killing process with pid 3914173 00:07:01.674 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3914173 00:07:01.674 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3914173 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:01.933 00:07:01.933 real 0m13.700s 00:07:01.933 user 0m53.939s 00:07:01.933 sys 0m1.197s 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.933 ************************************ 00:07:01.933 END TEST nvmf_filesystem_no_in_capsule 00:07:01.933 ************************************ 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.933 ************************************ 00:07:01.933 START TEST 
nvmf_filesystem_in_capsule 00:07:01.933 ************************************ 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3917098 00:07:01.933 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3917098 00:07:01.934 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:01.934 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3917098 ']' 00:07:01.934 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.934 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.934 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:01.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.934 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.934 17:15:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:01.934 [2024-07-12 17:15:20.691331] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:07:01.934 [2024-07-12 17:15:20.691369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.193 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.193 [2024-07-12 17:15:20.747735] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.193 [2024-07-12 17:15:20.828144] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.193 [2024-07-12 17:15:20.828180] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.193 [2024-07-12 17:15:20.828187] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.193 [2024-07-12 17:15:20.828194] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.193 [2024-07-12 17:15:20.828199] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:02.193 [2024-07-12 17:15:20.828233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.193 [2024-07-12 17:15:20.828327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.193 [2024-07-12 17:15:20.828421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.193 [2024-07-12 17:15:20.828422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.759 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.759 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:02.759 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:02.760 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:02.760 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.018 [2024-07-12 17:15:21.554532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.018 Malloc1 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.018 17:15:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.018 [2024-07-12 17:15:21.702013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.018 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:03.018 { 00:07:03.018 "name": "Malloc1", 00:07:03.018 "aliases": [ 00:07:03.018 "f94bd4be-0200-48da-8c40-585124036ea3" 00:07:03.018 ], 00:07:03.018 "product_name": "Malloc disk", 00:07:03.018 "block_size": 512, 00:07:03.018 "num_blocks": 1048576, 00:07:03.018 "uuid": "f94bd4be-0200-48da-8c40-585124036ea3", 00:07:03.018 "assigned_rate_limits": { 
00:07:03.018 "rw_ios_per_sec": 0, 00:07:03.018 "rw_mbytes_per_sec": 0, 00:07:03.018 "r_mbytes_per_sec": 0, 00:07:03.018 "w_mbytes_per_sec": 0 00:07:03.018 }, 00:07:03.018 "claimed": true, 00:07:03.018 "claim_type": "exclusive_write", 00:07:03.018 "zoned": false, 00:07:03.018 "supported_io_types": { 00:07:03.018 "read": true, 00:07:03.018 "write": true, 00:07:03.018 "unmap": true, 00:07:03.018 "flush": true, 00:07:03.018 "reset": true, 00:07:03.018 "nvme_admin": false, 00:07:03.018 "nvme_io": false, 00:07:03.018 "nvme_io_md": false, 00:07:03.018 "write_zeroes": true, 00:07:03.018 "zcopy": true, 00:07:03.018 "get_zone_info": false, 00:07:03.018 "zone_management": false, 00:07:03.018 "zone_append": false, 00:07:03.019 "compare": false, 00:07:03.019 "compare_and_write": false, 00:07:03.019 "abort": true, 00:07:03.019 "seek_hole": false, 00:07:03.019 "seek_data": false, 00:07:03.019 "copy": true, 00:07:03.019 "nvme_iov_md": false 00:07:03.019 }, 00:07:03.019 "memory_domains": [ 00:07:03.019 { 00:07:03.019 "dma_device_id": "system", 00:07:03.019 "dma_device_type": 1 00:07:03.019 }, 00:07:03.019 { 00:07:03.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.019 "dma_device_type": 2 00:07:03.019 } 00:07:03.019 ], 00:07:03.019 "driver_specific": {} 00:07:03.019 } 00:07:03.019 ]' 00:07:03.019 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:03.019 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:03.019 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:03.277 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:03.277 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:03.277 17:15:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:03.277 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:03.277 17:15:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:04.651 17:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:04.651 17:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:04.651 17:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:04.651 17:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:04.651 17:15:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:06.554 17:15:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:06.554 17:15:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:06.554 17:15:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:06.554 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:06.554 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:06.554 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # 
return 0 00:07:06.554 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:06.554 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:06.554 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:06.554 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:06.554 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:06.555 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:06.555 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:06.555 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:06.555 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:06.555 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:06.555 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:06.813 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:07.380 17:15:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:08.316 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:08.316 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:07:08.316 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:08.316 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.316 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.316 ************************************ 00:07:08.316 START TEST filesystem_in_capsule_ext4 00:07:08.316 ************************************ 00:07:08.316 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:08.316 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:08.316 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:08.316 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:08.316 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:08.316 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:08.316 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:08.316 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:08.317 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:08.317 17:15:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:08.317 17:15:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:08.317 mke2fs 1.46.5 (30-Dec-2021) 00:07:08.317 Discarding device blocks: 0/522240 done 00:07:08.317 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:08.317 Filesystem UUID: 6e9a4ca3-266f-4d5f-a560-bd2b21a9f0f2 00:07:08.317 Superblock backups stored on blocks: 00:07:08.317 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:08.317 00:07:08.317 Allocating group tables: 0/64 done 00:07:08.317 Writing inode tables: 0/64 done 00:07:09.252 Creating journal (8192 blocks): done 00:07:09.820 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:07:09.820 00:07:09.820 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:09.820 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:10.078 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:10.078 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:10.078 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:10.078 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:10.078 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:10.078 17:15:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:10.078 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3917098 00:07:10.078 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:10.078 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:10.078 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:10.078 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:10.078 00:07:10.078 real 0m1.957s 00:07:10.078 user 0m0.029s 00:07:10.078 sys 0m0.061s 00:07:10.078 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.078 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:10.078 ************************************ 00:07:10.078 END TEST filesystem_in_capsule_ext4 00:07:10.078 ************************************ 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.337 
17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:10.337 ************************************ 00:07:10.337 START TEST filesystem_in_capsule_btrfs 00:07:10.337 ************************************ 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:10.337 17:15:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:07:10.596 btrfs-progs v6.6.2 00:07:10.596 See https://btrfs.readthedocs.io for more information. 00:07:10.596 00:07:10.596 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:10.596 NOTE: several default settings have changed in version 5.15, please make sure 00:07:10.596 this does not affect your deployments: 00:07:10.596 - DUP for metadata (-m dup) 00:07:10.596 - enabled no-holes (-O no-holes) 00:07:10.596 - enabled free-space-tree (-R free-space-tree) 00:07:10.596 00:07:10.596 Label: (null) 00:07:10.596 UUID: 1d7ac920-7b9b-4fa2-aba7-79554c9b3cfe 00:07:10.596 Node size: 16384 00:07:10.596 Sector size: 4096 00:07:10.596 Filesystem size: 510.00MiB 00:07:10.596 Block group profiles: 00:07:10.596 Data: single 8.00MiB 00:07:10.596 Metadata: DUP 32.00MiB 00:07:10.596 System: DUP 8.00MiB 00:07:10.596 SSD detected: yes 00:07:10.596 Zoned device: no 00:07:10.596 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:10.596 Runtime features: free-space-tree 00:07:10.596 Checksum: crc32c 00:07:10.596 Number of devices: 1 00:07:10.596 Devices: 00:07:10.596 ID SIZE PATH 00:07:10.596 1 510.00MiB /dev/nvme0n1p1 00:07:10.596 00:07:10.596 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:10.596 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:11.164 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:11.164 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:11.164 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:11.164 17:15:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:11.164 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:11.164 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:11.164 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3917098 00:07:11.164 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:11.164 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:11.164 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:11.164 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:11.165 00:07:11.165 real 0m0.849s 00:07:11.165 user 0m0.026s 00:07:11.165 sys 0m0.124s 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:11.165 ************************************ 00:07:11.165 END TEST filesystem_in_capsule_btrfs 00:07:11.165 ************************************ 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.165 ************************************ 00:07:11.165 START TEST filesystem_in_capsule_xfs 00:07:11.165 ************************************ 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:11.165 17:15:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:11.165 17:15:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:11.165 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:11.165 = sectsz=512 attr=2, projid32bit=1 00:07:11.165 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:11.165 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:11.165 data = bsize=4096 blocks=130560, imaxpct=25 00:07:11.165 = sunit=0 swidth=0 blks 00:07:11.165 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:11.165 log =internal log bsize=4096 blocks=16384, version=2 00:07:11.165 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:11.165 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:12.099 Discarding blocks...Done. 00:07:12.099 17:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:12.099 17:15:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:14.003 17:15:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3917098 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:14.003 00:07:14.003 real 0m2.646s 00:07:14.003 user 0m0.021s 00:07:14.003 sys 0m0.075s 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:14.003 ************************************ 00:07:14.003 END TEST filesystem_in_capsule_xfs 00:07:14.003 ************************************ 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:14.003 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3917098 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3917098 ']' 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # kill -0 3917098 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.003 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3917098 00:07:14.262 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.262 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.262 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3917098' 00:07:14.262 killing process with pid 3917098 00:07:14.262 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3917098 00:07:14.262 17:15:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3917098 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:14.522 00:07:14.522 real 0m12.502s 00:07:14.522 user 0m49.110s 00:07:14.522 sys 0m1.224s 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.522 ************************************ 00:07:14.522 END TEST nvmf_filesystem_in_capsule 00:07:14.522 ************************************ 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:14.522 17:15:33 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:14.522 rmmod nvme_tcp 00:07:14.522 rmmod nvme_fabrics 00:07:14.522 rmmod nvme_keyring 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.522 17:15:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.058 17:15:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:17.058 00:07:17.058 real 0m34.009s 00:07:17.058 user 1m44.716s 00:07:17.058 sys 0m6.568s 00:07:17.058 17:15:35 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.058 17:15:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:17.058 ************************************ 00:07:17.058 END TEST nvmf_filesystem 00:07:17.058 ************************************ 00:07:17.058 17:15:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:17.058 17:15:35 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:17.058 17:15:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:17.058 17:15:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.058 17:15:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.058 ************************************ 00:07:17.058 START TEST nvmf_target_discovery 00:07:17.058 ************************************ 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:17.058 * Looking for test storage... 
00:07:17.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:07:17.058 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.059 17:15:35 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:17.059 17:15:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:22.327 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.327 
17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:22.327 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:22.327 Found net devices under 0000:86:00.0: cvl_0_0 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:22.327 Found net devices under 0000:86:00.1: cvl_0_1 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:22.327 17:15:40 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:22.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:07:22.327 00:07:22.327 --- 10.0.0.2 ping statistics --- 00:07:22.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.327 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:22.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:22.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:07:22.327 00:07:22.327 --- 10.0.0.1 ping statistics --- 00:07:22.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.327 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:22.327 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3922678 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3922678 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3922678 ']' 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.328 17:15:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.328 [2024-07-12 17:15:40.365201] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:07:22.328 [2024-07-12 17:15:40.365244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.328 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.328 [2024-07-12 17:15:40.422601] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.328 [2024-07-12 17:15:40.494942] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.328 [2024-07-12 17:15:40.494983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.328 [2024-07-12 17:15:40.494990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.328 [2024-07-12 17:15:40.494996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.328 [2024-07-12 17:15:40.495000] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:22.328 [2024-07-12 17:15:40.495064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.328 [2024-07-12 17:15:40.495159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.328 [2024-07-12 17:15:40.495249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.328 [2024-07-12 17:15:40.495251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.586 [2024-07-12 17:15:41.220471] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:22.586 17:15:41 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.586 Null1 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:22.586 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.587 [2024-07-12 17:15:41.265983] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:22.587 17:15:41 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.587 Null2 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.587 Null3 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.587 Null4 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.587 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.845 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.845 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:22.845 17:15:41 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.845 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.846 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.846 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:22.846 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.846 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.846 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.846 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:22.846 00:07:22.846 Discovery Log Number of Records 6, Generation counter 6 00:07:22.846 =====Discovery Log Entry 0====== 00:07:22.846 trtype: tcp 00:07:22.846 adrfam: ipv4 00:07:22.846 subtype: current discovery subsystem 00:07:22.846 treq: not required 00:07:22.846 portid: 0 00:07:22.846 trsvcid: 4420 00:07:22.846 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:22.846 traddr: 10.0.0.2 00:07:22.846 eflags: explicit discovery connections, duplicate discovery information 00:07:22.846 sectype: none 00:07:22.846 =====Discovery Log Entry 1====== 00:07:22.846 trtype: tcp 00:07:22.846 adrfam: ipv4 00:07:22.846 subtype: nvme subsystem 00:07:22.846 treq: not required 00:07:22.846 portid: 0 00:07:22.846 trsvcid: 4420 00:07:22.846 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:22.846 traddr: 10.0.0.2 00:07:22.846 eflags: none 00:07:22.846 sectype: none 00:07:22.846 =====Discovery Log Entry 2====== 00:07:22.846 trtype: tcp 00:07:22.846 adrfam: ipv4 00:07:22.846 subtype: nvme subsystem 00:07:22.846 treq: not required 00:07:22.846 portid: 
0 00:07:22.846 trsvcid: 4420 00:07:22.846 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:22.846 traddr: 10.0.0.2 00:07:22.846 eflags: none 00:07:22.846 sectype: none 00:07:22.846 =====Discovery Log Entry 3====== 00:07:22.846 trtype: tcp 00:07:22.846 adrfam: ipv4 00:07:22.846 subtype: nvme subsystem 00:07:22.846 treq: not required 00:07:22.846 portid: 0 00:07:22.846 trsvcid: 4420 00:07:22.846 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:22.846 traddr: 10.0.0.2 00:07:22.846 eflags: none 00:07:22.846 sectype: none 00:07:22.846 =====Discovery Log Entry 4====== 00:07:22.846 trtype: tcp 00:07:22.846 adrfam: ipv4 00:07:22.846 subtype: nvme subsystem 00:07:22.846 treq: not required 00:07:22.846 portid: 0 00:07:22.846 trsvcid: 4420 00:07:22.846 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:22.846 traddr: 10.0.0.2 00:07:22.846 eflags: none 00:07:22.846 sectype: none 00:07:22.846 =====Discovery Log Entry 5====== 00:07:22.846 trtype: tcp 00:07:22.846 adrfam: ipv4 00:07:22.846 subtype: discovery subsystem referral 00:07:22.846 treq: not required 00:07:22.846 portid: 0 00:07:22.846 trsvcid: 4430 00:07:22.846 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:22.846 traddr: 10.0.0.2 00:07:22.846 eflags: none 00:07:22.846 sectype: none 00:07:22.846 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:22.846 Perform nvmf subsystem discovery via RPC 00:07:22.846 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:22.846 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.846 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.846 [ 00:07:22.846 { 00:07:22.846 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:22.846 "subtype": "Discovery", 00:07:22.846 "listen_addresses": [ 00:07:22.846 { 00:07:22.846 "trtype": "TCP", 00:07:22.846 "adrfam": "IPv4", 00:07:22.846 "traddr": "10.0.0.2", 
00:07:22.846 "trsvcid": "4420" 00:07:22.846 } 00:07:22.846 ], 00:07:22.846 "allow_any_host": true, 00:07:22.846 "hosts": [] 00:07:22.846 }, 00:07:22.846 { 00:07:22.846 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:22.846 "subtype": "NVMe", 00:07:22.846 "listen_addresses": [ 00:07:22.846 { 00:07:22.846 "trtype": "TCP", 00:07:22.846 "adrfam": "IPv4", 00:07:22.846 "traddr": "10.0.0.2", 00:07:22.846 "trsvcid": "4420" 00:07:22.846 } 00:07:22.846 ], 00:07:22.846 "allow_any_host": true, 00:07:22.846 "hosts": [], 00:07:22.846 "serial_number": "SPDK00000000000001", 00:07:22.846 "model_number": "SPDK bdev Controller", 00:07:22.846 "max_namespaces": 32, 00:07:22.846 "min_cntlid": 1, 00:07:22.846 "max_cntlid": 65519, 00:07:22.846 "namespaces": [ 00:07:22.846 { 00:07:22.846 "nsid": 1, 00:07:22.846 "bdev_name": "Null1", 00:07:22.846 "name": "Null1", 00:07:22.846 "nguid": "28A67FEA719F41BCA76BCB6BFDE7BC9B", 00:07:22.846 "uuid": "28a67fea-719f-41bc-a76b-cb6bfde7bc9b" 00:07:22.846 } 00:07:22.846 ] 00:07:22.846 }, 00:07:22.846 { 00:07:22.846 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:22.846 "subtype": "NVMe", 00:07:22.846 "listen_addresses": [ 00:07:22.846 { 00:07:22.846 "trtype": "TCP", 00:07:22.846 "adrfam": "IPv4", 00:07:22.846 "traddr": "10.0.0.2", 00:07:22.846 "trsvcid": "4420" 00:07:22.846 } 00:07:22.846 ], 00:07:22.846 "allow_any_host": true, 00:07:22.846 "hosts": [], 00:07:22.846 "serial_number": "SPDK00000000000002", 00:07:22.846 "model_number": "SPDK bdev Controller", 00:07:22.846 "max_namespaces": 32, 00:07:22.846 "min_cntlid": 1, 00:07:22.846 "max_cntlid": 65519, 00:07:22.846 "namespaces": [ 00:07:22.846 { 00:07:22.846 "nsid": 1, 00:07:22.846 "bdev_name": "Null2", 00:07:22.846 "name": "Null2", 00:07:22.846 "nguid": "66C030670C2F4DC2B53DADFB1D624767", 00:07:22.846 "uuid": "66c03067-0c2f-4dc2-b53d-adfb1d624767" 00:07:22.846 } 00:07:22.846 ] 00:07:22.846 }, 00:07:22.846 { 00:07:22.846 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:22.846 "subtype": "NVMe", 00:07:22.846 
"listen_addresses": [ 00:07:22.846 { 00:07:22.846 "trtype": "TCP", 00:07:22.846 "adrfam": "IPv4", 00:07:22.846 "traddr": "10.0.0.2", 00:07:22.846 "trsvcid": "4420" 00:07:22.846 } 00:07:22.846 ], 00:07:22.846 "allow_any_host": true, 00:07:22.846 "hosts": [], 00:07:22.846 "serial_number": "SPDK00000000000003", 00:07:22.846 "model_number": "SPDK bdev Controller", 00:07:22.846 "max_namespaces": 32, 00:07:22.846 "min_cntlid": 1, 00:07:22.846 "max_cntlid": 65519, 00:07:22.846 "namespaces": [ 00:07:22.846 { 00:07:22.846 "nsid": 1, 00:07:22.846 "bdev_name": "Null3", 00:07:22.846 "name": "Null3", 00:07:22.846 "nguid": "D32A62009D9E4F14B47C0944B10F9972", 00:07:22.846 "uuid": "d32a6200-9d9e-4f14-b47c-0944b10f9972" 00:07:22.846 } 00:07:22.846 ] 00:07:22.846 }, 00:07:22.846 { 00:07:22.846 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:22.846 "subtype": "NVMe", 00:07:22.846 "listen_addresses": [ 00:07:22.846 { 00:07:22.846 "trtype": "TCP", 00:07:22.846 "adrfam": "IPv4", 00:07:22.846 "traddr": "10.0.0.2", 00:07:22.846 "trsvcid": "4420" 00:07:22.846 } 00:07:22.846 ], 00:07:22.846 "allow_any_host": true, 00:07:22.846 "hosts": [], 00:07:22.846 "serial_number": "SPDK00000000000004", 00:07:22.846 "model_number": "SPDK bdev Controller", 00:07:22.847 "max_namespaces": 32, 00:07:22.847 "min_cntlid": 1, 00:07:22.847 "max_cntlid": 65519, 00:07:22.847 "namespaces": [ 00:07:22.847 { 00:07:22.847 "nsid": 1, 00:07:22.847 "bdev_name": "Null4", 00:07:22.847 "name": "Null4", 00:07:22.847 "nguid": "F0D7AA37D1BD425DB5644BCBEA4371AE", 00:07:22.847 "uuid": "f0d7aa37-d1bd-425d-b564-4bcbea4371ae" 00:07:22.847 } 00:07:22.847 ] 00:07:22.847 } 00:07:22.847 ] 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:22.847 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.105 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:23.105 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.105 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:23.105 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.105 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:23.105 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:23.105 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.105 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:23.105 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.105 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:23.106 rmmod nvme_tcp 00:07:23.106 rmmod nvme_fabrics 00:07:23.106 rmmod nvme_keyring 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:23.106 
17:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0
00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3922678 ']'
00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3922678
00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3922678 ']'
00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3922678
00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname
00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3922678
00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3922678'
00:07:23.106 killing process with pid 3922678
00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3922678
00:07:23.106 17:15:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3922678
00:07:23.365 17:15:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:23.365 17:15:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:23.365 17:15:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:23.365 17:15:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:23.365 17:15:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:23.365 17:15:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:23.365 17:15:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:23.365 17:15:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:25.356 17:15:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:25.356
00:07:25.356 real 0m8.691s
00:07:25.356 user 0m7.518s
00:07:25.356 sys 0m4.002s
00:07:25.356 17:15:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:25.356 17:15:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:25.356 ************************************
00:07:25.356 END TEST nvmf_target_discovery
00:07:25.356 ************************************
00:07:25.356 17:15:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:07:25.356 17:15:44 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:07:25.356 17:15:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:07:25.356 17:15:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:25.356 17:15:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:25.616 ************************************
00:07:25.616 START TEST nvmf_referrals
00:07:25.616 ************************************
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:07:25.616 * Looking for test storage...
00:07:25.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable
00:07:25.616 17:15:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=()
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=()
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=()
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=()
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=()
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=()
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=()
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:07:30.885 Found 0000:86:00.0 (0x8086 - 0x159b)
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:30.885 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:07:30.885 Found 0000:86:00.1 (0x8086 - 0x159b)
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:07:30.886 Found net devices under 0000:86:00.0: cvl_0_0
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:07:30.886 Found net devices under 0000:86:00.1: cvl_0_1
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:30.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:30.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms
00:07:30.886
00:07:30.886 --- 10.0.0.2 ping statistics ---
00:07:30.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:30.886 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:30.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:30.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms
00:07:30.886
00:07:30.886 --- 10.0.0.1 ping statistics ---
00:07:30.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:30.886 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3926246
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3926246
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3926246 ']'
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:30.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:30.886 17:15:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:30.886 [2024-07-12 17:15:48.948489] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:07:30.886 [2024-07-12 17:15:48.948534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:30.886 EAL: No free 2048 kB hugepages reported on node 1
00:07:30.886 [2024-07-12 17:15:49.005344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:30.886 [2024-07-12 17:15:49.087044] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:30.886 [2024-07-12 17:15:49.087080] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:30.886 [2024-07-12 17:15:49.087087] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:30.886 [2024-07-12 17:15:49.087092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:30.886 [2024-07-12 17:15:49.087097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:30.886 [2024-07-12 17:15:49.087142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:30.886 [2024-07-12 17:15:49.087240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:30.886 [2024-07-12 17:15:49.087257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:30.886 [2024-07-12 17:15:49.087259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.144 [2024-07-12 17:15:49.799470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.144 [2024-07-12 17:15:49.812888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:07:31.144 17:15:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:31.402 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:07:31.659 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:07:31.917 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2
00:07:31.917 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:07:31.917 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem'
00:07:31.917 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn
00:07:31.917 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:07:31.917 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:31.917 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:07:31.917 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:07:31.917 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral'
00:07:31.917 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn
00:07:31.917 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:07:31.917 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:31.917 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:07:32.174 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:07:32.174 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals --
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.175 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:32.432 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:32.433 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:32.433 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:32.433 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:32.433 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.433 17:15:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:32.433 17:15:51 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:32.433 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:32.689 rmmod nvme_tcp 00:07:32.689 rmmod nvme_fabrics 00:07:32.689 rmmod nvme_keyring 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3926246 ']' 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3926246 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3926246 ']' 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3926246 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3926246 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.689 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.690 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3926246' 00:07:32.690 killing process with pid 3926246 00:07:32.690 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3926246 00:07:32.690 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3926246 00:07:32.947 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:32.948 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:32.948 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:32.948 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.948 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.948 17:15:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.948 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.948 17:15:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.480 17:15:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.480 00:07:35.480 real 0m9.569s 00:07:35.480 user 0m12.178s 00:07:35.480 sys 0m4.118s 00:07:35.480 17:15:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.480 17:15:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:35.480 ************************************ 
00:07:35.480 END TEST nvmf_referrals 00:07:35.480 ************************************ 00:07:35.480 17:15:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:35.480 17:15:53 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:35.480 17:15:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.480 17:15:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.480 17:15:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:35.480 ************************************ 00:07:35.480 START TEST nvmf_connect_disconnect 00:07:35.480 ************************************ 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:35.480 * Looking for test storage... 00:07:35.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.480 17:15:53 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.480 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.481 17:15:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:40.751 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:40.752 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:40.752 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.752 17:15:59 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:40.752 Found net devices under 0000:86:00.0: cvl_0_0 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:40.752 Found net devices under 0000:86:00.1: cvl_0_1 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:40.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:40.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:07:40.752 00:07:40.752 --- 10.0.0.2 ping statistics --- 00:07:40.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.752 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:40.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:07:40.752 00:07:40.752 --- 10.0.0.1 ping statistics --- 00:07:40.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.752 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3930314 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3930314 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3930314 ']' 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.752 17:15:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:40.752 [2024-07-12 17:15:59.510319] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:07:40.752 [2024-07-12 17:15:59.510363] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.011 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.011 [2024-07-12 17:15:59.567022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.011 [2024-07-12 17:15:59.648143] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.011 [2024-07-12 17:15:59.648178] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.011 [2024-07-12 17:15:59.648185] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.011 [2024-07-12 17:15:59.648191] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.011 [2024-07-12 17:15:59.648196] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:41.011 [2024-07-12 17:15:59.648245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.011 [2024-07-12 17:15:59.648258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.011 [2024-07-12 17:15:59.648369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.011 [2024-07-12 17:15:59.648370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.577 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.577 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:41.577 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.577 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.577 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.849 [2024-07-12 17:16:00.374499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.849 [2024-07-12 17:16:00.426031] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:41.849 17:16:00 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:41.849 17:16:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:45.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:58.316 rmmod nvme_tcp 00:07:58.316 rmmod nvme_fabrics 00:07:58.316 rmmod nvme_keyring 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3930314 ']' 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3930314 00:07:58.316 17:16:16 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3930314 ']' 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3930314 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3930314 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3930314' 00:07:58.316 killing process with pid 3930314 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3930314 00:07:58.316 17:16:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3930314 00:07:58.316 17:16:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:58.316 17:16:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:58.316 17:16:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:58.316 17:16:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:58.316 17:16:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:58.316 17:16:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.316 17:16:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.316 17:16:17 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.879 17:16:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:00.879 00:08:00.879 real 0m25.373s 00:08:00.879 user 1m11.006s 00:08:00.879 sys 0m5.273s 00:08:00.879 17:16:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.879 17:16:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:00.879 ************************************ 00:08:00.879 END TEST nvmf_connect_disconnect 00:08:00.879 ************************************ 00:08:00.879 17:16:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:00.879 17:16:19 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:00.879 17:16:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:00.879 17:16:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.879 17:16:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.879 ************************************ 00:08:00.879 START TEST nvmf_multitarget 00:08:00.879 ************************************ 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:00.879 * Looking for test storage... 
00:08:00.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.879 17:16:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:06.161 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:06.161 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:06.161 Found net devices under 0000:86:00.0: cvl_0_0 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.161 17:16:24 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:06.161 Found net devices under 0000:86:00.1: cvl_0_1 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.161 17:16:24 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:06.161 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.420 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.420 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.420 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:06.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:08:06.420 00:08:06.420 --- 10.0.0.2 ping statistics --- 00:08:06.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.420 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:08:06.420 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:06.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:08:06.420 00:08:06.420 --- 10.0.0.1 ping statistics --- 00:08:06.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.420 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:08:06.420 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.420 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:06.420 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.420 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.420 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:06.420 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:06.420 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.420 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:06.420 17:16:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:06.420 17:16:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:06.420 17:16:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.420 17:16:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.420 17:16:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:06.420 17:16:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3936713 00:08:06.420 17:16:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3936713 00:08:06.420 17:16:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:06.420 17:16:25 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@829 -- # '[' -z 3936713 ']' 00:08:06.420 17:16:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.420 17:16:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.420 17:16:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.420 17:16:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.420 17:16:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:06.420 [2024-07-12 17:16:25.074671] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:08:06.420 [2024-07-12 17:16:25.074716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.420 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.420 [2024-07-12 17:16:25.132291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.678 [2024-07-12 17:16:25.207201] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.678 [2024-07-12 17:16:25.207236] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.678 [2024-07-12 17:16:25.207243] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.678 [2024-07-12 17:16:25.207249] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.678 [2024-07-12 17:16:25.207273] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:06.678 [2024-07-12 17:16:25.207311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.678 [2024-07-12 17:16:25.207408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.678 [2024-07-12 17:16:25.207447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.678 [2024-07-12 17:16:25.207449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.243 17:16:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.243 17:16:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:07.243 17:16:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.243 17:16:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.243 17:16:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:07.243 17:16:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.243 17:16:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:07.243 17:16:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:07.243 17:16:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:07.243 17:16:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:07.243 17:16:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:07.502 "nvmf_tgt_1" 00:08:07.502 17:16:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:07.502 "nvmf_tgt_2" 00:08:07.502 17:16:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:07.502 17:16:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:07.760 17:16:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:07.760 17:16:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:07.760 true 00:08:07.760 17:16:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:07.760 true 00:08:07.760 17:16:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:07.760 17:16:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.019 17:16:26 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.019 rmmod nvme_tcp 00:08:08.019 rmmod nvme_fabrics 00:08:08.019 rmmod nvme_keyring 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3936713 ']' 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3936713 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3936713 ']' 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3936713 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3936713 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3936713' 00:08:08.019 killing process with pid 3936713 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3936713 00:08:08.019 17:16:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3936713 00:08:08.278 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.278 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.278 17:16:26 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.278 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.278 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.278 17:16:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.278 17:16:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.278 17:16:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.814 17:16:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.814 00:08:10.814 real 0m9.753s 00:08:10.814 user 0m9.052s 00:08:10.814 sys 0m4.735s 00:08:10.814 17:16:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.814 17:16:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:10.814 ************************************ 00:08:10.814 END TEST nvmf_multitarget 00:08:10.814 ************************************ 00:08:10.814 17:16:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:10.814 17:16:29 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:10.814 17:16:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.814 17:16:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.814 17:16:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.814 ************************************ 00:08:10.814 START TEST nvmf_rpc 00:08:10.814 ************************************ 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:10.814 * Looking for test storage... 
00:08:10.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.814 17:16:29 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.815 17:16:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.077 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.077 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:16.077 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:16.077 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:16.077 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:16.077 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:16.077 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:16.077 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:16.077 17:16:34 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:16.077 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:16.077 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:16.077 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:16.078 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:16.078 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:16.078 Found net devices under 0000:86:00.0: cvl_0_0 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:16.078 Found net devices under 0000:86:00.1: cvl_0_1 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.078 17:16:34 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:16.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:08:16.078 00:08:16.078 --- 10.0.0.2 ping statistics --- 00:08:16.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.078 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:08:16.078 00:08:16.078 --- 10.0.0.1 ping statistics --- 00:08:16.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.078 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:16.078 
17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3940498 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3940498 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3940498 ']' 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.078 17:16:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:16.078 [2024-07-12 17:16:34.649030] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:08:16.078 [2024-07-12 17:16:34.649069] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.078 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.078 [2024-07-12 17:16:34.706728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.078 [2024-07-12 17:16:34.787826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.078 [2024-07-12 17:16:34.787865] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.078 [2024-07-12 17:16:34.787872] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.078 [2024-07-12 17:16:34.787881] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.078 [2024-07-12 17:16:34.787887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:16.078 [2024-07-12 17:16:34.787929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.078 [2024-07-12 17:16:34.788027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.078 [2024-07-12 17:16:34.788058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.078 [2024-07-12 17:16:34.788057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:17.014 "tick_rate": 2300000000, 00:08:17.014 "poll_groups": [ 00:08:17.014 { 00:08:17.014 "name": "nvmf_tgt_poll_group_000", 00:08:17.014 "admin_qpairs": 0, 00:08:17.014 "io_qpairs": 0, 00:08:17.014 "current_admin_qpairs": 0, 00:08:17.014 "current_io_qpairs": 0, 00:08:17.014 "pending_bdev_io": 0, 00:08:17.014 "completed_nvme_io": 0, 00:08:17.014 "transports": [] 00:08:17.014 }, 00:08:17.014 { 00:08:17.014 "name": "nvmf_tgt_poll_group_001", 00:08:17.014 "admin_qpairs": 0, 00:08:17.014 "io_qpairs": 0, 00:08:17.014 "current_admin_qpairs": 
0, 00:08:17.014 "current_io_qpairs": 0, 00:08:17.014 "pending_bdev_io": 0, 00:08:17.014 "completed_nvme_io": 0, 00:08:17.014 "transports": [] 00:08:17.014 }, 00:08:17.014 { 00:08:17.014 "name": "nvmf_tgt_poll_group_002", 00:08:17.014 "admin_qpairs": 0, 00:08:17.014 "io_qpairs": 0, 00:08:17.014 "current_admin_qpairs": 0, 00:08:17.014 "current_io_qpairs": 0, 00:08:17.014 "pending_bdev_io": 0, 00:08:17.014 "completed_nvme_io": 0, 00:08:17.014 "transports": [] 00:08:17.014 }, 00:08:17.014 { 00:08:17.014 "name": "nvmf_tgt_poll_group_003", 00:08:17.014 "admin_qpairs": 0, 00:08:17.014 "io_qpairs": 0, 00:08:17.014 "current_admin_qpairs": 0, 00:08:17.014 "current_io_qpairs": 0, 00:08:17.014 "pending_bdev_io": 0, 00:08:17.014 "completed_nvme_io": 0, 00:08:17.014 "transports": [] 00:08:17.014 } 00:08:17.014 ] 00:08:17.014 }' 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.014 [2024-07-12 17:16:35.611675] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.014 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:17.014 "tick_rate": 2300000000, 00:08:17.014 "poll_groups": [ 00:08:17.014 { 00:08:17.014 "name": "nvmf_tgt_poll_group_000", 00:08:17.014 "admin_qpairs": 0, 00:08:17.014 "io_qpairs": 0, 00:08:17.014 "current_admin_qpairs": 0, 00:08:17.014 "current_io_qpairs": 0, 00:08:17.014 "pending_bdev_io": 0, 00:08:17.015 "completed_nvme_io": 0, 00:08:17.015 "transports": [ 00:08:17.015 { 00:08:17.015 "trtype": "TCP" 00:08:17.015 } 00:08:17.015 ] 00:08:17.015 }, 00:08:17.015 { 00:08:17.015 "name": "nvmf_tgt_poll_group_001", 00:08:17.015 "admin_qpairs": 0, 00:08:17.015 "io_qpairs": 0, 00:08:17.015 "current_admin_qpairs": 0, 00:08:17.015 "current_io_qpairs": 0, 00:08:17.015 "pending_bdev_io": 0, 00:08:17.015 "completed_nvme_io": 0, 00:08:17.015 "transports": [ 00:08:17.015 { 00:08:17.015 "trtype": "TCP" 00:08:17.015 } 00:08:17.015 ] 00:08:17.015 }, 00:08:17.015 { 00:08:17.015 "name": "nvmf_tgt_poll_group_002", 00:08:17.015 "admin_qpairs": 0, 00:08:17.015 "io_qpairs": 0, 00:08:17.015 "current_admin_qpairs": 0, 00:08:17.015 "current_io_qpairs": 0, 00:08:17.015 "pending_bdev_io": 0, 00:08:17.015 "completed_nvme_io": 0, 00:08:17.015 "transports": [ 00:08:17.015 { 00:08:17.015 "trtype": "TCP" 00:08:17.015 } 00:08:17.015 ] 00:08:17.015 }, 00:08:17.015 { 00:08:17.015 "name": "nvmf_tgt_poll_group_003", 00:08:17.015 "admin_qpairs": 0, 00:08:17.015 "io_qpairs": 0, 00:08:17.015 "current_admin_qpairs": 0, 00:08:17.015 "current_io_qpairs": 0, 00:08:17.015 "pending_bdev_io": 0, 00:08:17.015 "completed_nvme_io": 0, 00:08:17.015 "transports": [ 00:08:17.015 { 00:08:17.015 "trtype": "TCP" 00:08:17.015 } 00:08:17.015 ] 00:08:17.015 } 
00:08:17.015 ] 00:08:17.015 }' 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.015 Malloc1 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- 
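The `jsum` helper from target/rpc.sh, exercised above to check that admin and I/O qpair totals are zero, sums a numeric jq filter with awk. A self-contained re-sketch (reading stdin here rather than the script's stats variable, which is an assumption of this sketch):

```shell
# jsum-style helper: apply a jq filter that yields one number per line,
# then sum the lines with awk (s+0 forces numeric output even for empty input).
jsum() {
    local filter=$1
    jq "$filter" | awk '{s+=$1} END {print s+0}'
}

echo '{"poll_groups":[{"io_qpairs":2},{"io_qpairs":3}]}' \
    | jsum '.poll_groups[].io_qpairs'
```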
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.015 [2024-07-12 17:16:35.779731] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:17.015 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:17.274 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:17.274 [2024-07-12 17:16:35.808316] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:17.274 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:17.274 could not add new controller: failed to write to nvme-fabrics device 00:08:17.274 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:17.274 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:17.274 17:16:35 nvmf_tcp.nvmf_rpc -- 
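The `NOT` prefix from autotest_common.sh, used above to assert that the unauthorized `nvme connect` is rejected, inverts a command's exit status. A minimal sketch of the idea (the real helper also validates the argument with `type` and tracks an `es` variable, both omitted here):

```shell
# NOT-style wrapper: succeed only when the wrapped command fails,
# so "NOT cmd ..." asserts the expected-failure path in a test script.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as the test expects
}
```

`NOT false` exits 0 and `NOT true` exits 1, mirroring how the denied-host connect above is treated as a pass.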
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:17.274 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:17.274 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:17.274 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.274 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.274 17:16:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.274 17:16:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:18.209 17:16:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:18.209 17:16:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:18.209 17:16:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:18.209 17:16:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:18.209 17:16:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:20.742 17:16:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:20.742 17:16:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:20.742 17:16:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:20.742 17:16:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:20.742 17:16:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:20.742 17:16:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:20.742 17:16:38 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:20.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:20.742 17:16:39 
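`waitforserial` and `waitforserial_disconnect`, both visible above, follow the same poll-with-retry shape: probe with `lsblk | grep`, retry up to 15 times, sleep 2 between attempts. Generalized into a hypothetical `poll_until` helper (the name and parameters are chosen for this sketch, not taken from autotest_common.sh):

```shell
# poll_until TRIES DELAY CMD...: run CMD until it succeeds, retrying up to
# TRIES times with DELAY seconds between attempts; returns 1 on timeout.
poll_until() {
    local tries=$1 delay=$2 i
    shift 2
    for (( i = 0; i < tries; i++ )); do
        "$@" && return 0
        sleep "$delay"
    done
    return 1
}

# e.g. wait for the serial to appear, as waitforserial does:
# poll_until 15 2 sh -c 'lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME'
```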
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:20.742 [2024-07-12 17:16:39.103910] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:20.742 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:20.742 could not add new controller: failed to write to nvme-fabrics device 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.742 17:16:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:21.678 17:16:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:21.678 17:16:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:21.678 17:16:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:21.678 17:16:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:21.678 17:16:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:23.581 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:23.581 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:23.581 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:23.581 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:23.581 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:23.581 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:23.581 17:16:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:23.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.839 [2024-07-12 17:16:42.498209] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:23.839 17:16:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:23.840 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.840 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.840 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.840 17:16:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:23.840 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.840 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.840 17:16:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.840 17:16:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:25.214 17:16:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:25.214 17:16:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:25.214 17:16:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:25.214 17:16:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:25.214 17:16:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:27.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.118 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.119 [2024-07-12 17:16:45.797803] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.119 17:16:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:28.554 17:16:46 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:28.554 17:16:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:28.554 17:16:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:28.554 17:16:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:28.554 17:16:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:30.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:30.458 17:16:49 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.458 [2024-07-12 17:16:49.140327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:30.458 17:16:49 
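Each loop iteration above performs the same subsystem lifecycle via `rpc_cmd`: create the subsystem, add the TCP listener, attach Malloc1 as nsid 5, allow any host, connect and disconnect the initiator, then remove the namespace and delete the subsystem. Condensed here with `rpc_cmd` stubbed to `echo` so the sketch runs without a live SPDK target (the real `rpc_cmd` wraps SPDK's scripts/rpc.py):

```shell
# Stub for this sketch only: the real rpc_cmd dispatches JSON-RPC calls
# to a running SPDK target via scripts/rpc.py.
rpc_cmd() { echo "rpc: $*"; }

loops=5
for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # ... host connects, waitforserial, host disconnects ...
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done
```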
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.458 17:16:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:31.834 17:16:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:31.834 17:16:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:31.834 17:16:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:31.834 17:16:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:31.834 17:16:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 
0 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:33.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.735 [2024-07-12 17:16:52.428085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.735 17:16:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.109 17:16:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:35.109 17:16:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:08:35.109 17:16:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:35.109 17:16:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:35.109 17:16:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.010 [2024-07-12 17:16:55.755330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.010 17:16:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:38.379 17:16:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:38.379 17:16:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:38.379 17:16:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:38.379 17:16:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:38.379 17:16:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:40.276 17:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:40.276 17:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:40.276 17:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:40.276 17:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:40.276 17:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:40.276 17:16:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:40.276 17:16:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:40.534 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.534 17:16:59 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.534 [2024-07-12 17:16:59.135683] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.534 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 [2024-07-12 17:16:59.183799] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 [2024-07-12 17:16:59.235973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 [2024-07-12 17:16:59.284153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.535 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.793 [2024-07-12 17:16:59.332307] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:40.793 "tick_rate": 2300000000, 00:08:40.793 "poll_groups": [ 00:08:40.793 { 00:08:40.793 "name": "nvmf_tgt_poll_group_000", 00:08:40.793 "admin_qpairs": 2, 00:08:40.793 "io_qpairs": 168, 00:08:40.793 "current_admin_qpairs": 0, 00:08:40.793 "current_io_qpairs": 0, 00:08:40.793 "pending_bdev_io": 0, 00:08:40.793 "completed_nvme_io": 270, 00:08:40.793 "transports": [ 00:08:40.793 { 00:08:40.793 "trtype": "TCP" 00:08:40.793 } 00:08:40.793 ] 00:08:40.793 }, 00:08:40.793 { 00:08:40.793 "name": "nvmf_tgt_poll_group_001", 00:08:40.793 "admin_qpairs": 2, 00:08:40.793 "io_qpairs": 168, 
00:08:40.793 "current_admin_qpairs": 0, 00:08:40.793 "current_io_qpairs": 0, 00:08:40.793 "pending_bdev_io": 0, 00:08:40.793 "completed_nvme_io": 414, 00:08:40.793 "transports": [ 00:08:40.793 { 00:08:40.793 "trtype": "TCP" 00:08:40.793 } 00:08:40.793 ] 00:08:40.793 }, 00:08:40.793 { 00:08:40.793 "name": "nvmf_tgt_poll_group_002", 00:08:40.793 "admin_qpairs": 1, 00:08:40.793 "io_qpairs": 168, 00:08:40.793 "current_admin_qpairs": 0, 00:08:40.793 "current_io_qpairs": 0, 00:08:40.793 "pending_bdev_io": 0, 00:08:40.793 "completed_nvme_io": 170, 00:08:40.793 "transports": [ 00:08:40.793 { 00:08:40.793 "trtype": "TCP" 00:08:40.793 } 00:08:40.793 ] 00:08:40.793 }, 00:08:40.793 { 00:08:40.793 "name": "nvmf_tgt_poll_group_003", 00:08:40.793 "admin_qpairs": 2, 00:08:40.793 "io_qpairs": 168, 00:08:40.793 "current_admin_qpairs": 0, 00:08:40.793 "current_io_qpairs": 0, 00:08:40.793 "pending_bdev_io": 0, 00:08:40.793 "completed_nvme_io": 168, 00:08:40.793 "transports": [ 00:08:40.793 { 00:08:40.793 "trtype": "TCP" 00:08:40.793 } 00:08:40.793 ] 00:08:40.793 } 00:08:40.793 ] 00:08:40.793 }' 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.793 rmmod nvme_tcp 00:08:40.793 rmmod nvme_fabrics 00:08:40.793 rmmod nvme_keyring 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:40.793 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:40.794 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3940498 ']' 00:08:40.794 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3940498 00:08:40.794 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3940498 ']' 00:08:40.794 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3940498 00:08:40.794 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:40.794 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.794 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3940498 00:08:41.052 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:41.052 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:41.052 
17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3940498' 00:08:41.052 killing process with pid 3940498 00:08:41.052 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3940498 00:08:41.052 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3940498 00:08:41.052 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.052 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:41.052 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:41.052 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.052 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.052 17:16:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.052 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.052 17:16:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.582 17:17:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.582 00:08:43.582 real 0m32.813s 00:08:43.582 user 1m41.286s 00:08:43.582 sys 0m5.822s 00:08:43.582 17:17:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.582 17:17:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.582 ************************************ 00:08:43.582 END TEST nvmf_rpc 00:08:43.582 ************************************ 00:08:43.582 17:17:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:43.582 17:17:01 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:43.582 17:17:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.582 17:17:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:08:43.582 17:17:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.582 ************************************ 00:08:43.582 START TEST nvmf_invalid 00:08:43.582 ************************************ 00:08:43.582 17:17:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:43.582 * Looking for test storage... 00:08:43.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:43.582 17:17:02 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:43.582 17:17:02 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.583 17:17:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.855 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:48.856 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.856 17:17:07 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:48.856 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:48.856 Found net devices under 0000:86:00.0: cvl_0_0 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:48.856 Found net devices under 0000:86:00.1: cvl_0_1 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.856 17:17:07 
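The discovery loop logged above buckets PCI functions by vendor:device ID and then maps each function to its kernel net device. A minimal sketch of the bucketing step — the `pci_bus_cache` contents here are hard-coded stand-ins (the real common.sh populates the cache by scanning sysfs):

```shell
#!/usr/bin/env bash
# Sketch of the vendor:device bucketing done by gather_supported_nvmf_pci_devs.
# The cache entry below is a stand-in matching the two E810 ports in this log;
# the real script fills pci_bus_cache from /sys/bus/pci/devices.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1"  # Intel E810 (ice driver)
)

e810=()
e810+=(${pci_bus_cache["0x8086:0x159b"]})  # unquoted on purpose: word-split into BDFs
pci_devs=("${e810[@]}")

for pci in "${pci_devs[@]}"; do
  echo "Found $pci (0x8086 - 0x159b)"
done
```

With two bus:device.function entries in the cache value, the loop reports both ports, matching the two "Found 0000:86:00.x" lines in the log.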
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:08:48.856 00:08:48.856 --- 10.0.0.2 ping statistics --- 00:08:48.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.856 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:08:48.856 00:08:48.856 --- 10.0.0.1 ping statistics --- 00:08:48.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.856 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=3948183 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3948183 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3948183 ']' 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.856 17:17:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:48.856 [2024-07-12 17:17:07.405322] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:08:48.856 [2024-07-12 17:17:07.405369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.856 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.856 [2024-07-12 17:17:07.464518] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.856 [2024-07-12 17:17:07.546060] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.856 [2024-07-12 17:17:07.546096] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:48.856 [2024-07-12 17:17:07.546103] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.856 [2024-07-12 17:17:07.546109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.856 [2024-07-12 17:17:07.546114] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.856 [2024-07-12 17:17:07.546156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.856 [2024-07-12 17:17:07.546250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.856 [2024-07-12 17:17:07.546336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.856 [2024-07-12 17:17:07.546337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.791 17:17:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.791 17:17:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:08:49.791 17:17:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.791 17:17:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.791 17:17:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:49.791 17:17:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.791 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:49.791 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19504 00:08:49.791 [2024-07-12 17:17:08.409832] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:49.791 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:08:49.791 { 00:08:49.791 "nqn": "nqn.2016-06.io.spdk:cnode19504", 00:08:49.791 "tgt_name": "foobar", 00:08:49.791 "method": "nvmf_create_subsystem", 00:08:49.791 "req_id": 1 00:08:49.791 } 00:08:49.791 Got JSON-RPC error response 00:08:49.791 response: 00:08:49.791 { 00:08:49.791 "code": -32603, 00:08:49.791 "message": "Unable to find target foobar" 00:08:49.791 }' 00:08:49.791 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:49.791 { 00:08:49.791 "nqn": "nqn.2016-06.io.spdk:cnode19504", 00:08:49.791 "tgt_name": "foobar", 00:08:49.791 "method": "nvmf_create_subsystem", 00:08:49.791 "req_id": 1 00:08:49.791 } 00:08:49.791 Got JSON-RPC error response 00:08:49.791 response: 00:08:49.791 { 00:08:49.791 "code": -32603, 00:08:49.791 "message": "Unable to find target foobar" 00:08:49.791 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:49.791 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:49.791 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15932 00:08:50.049 [2024-07-12 17:17:08.598493] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15932: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:50.049 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:50.049 { 00:08:50.049 "nqn": "nqn.2016-06.io.spdk:cnode15932", 00:08:50.049 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:50.049 "method": "nvmf_create_subsystem", 00:08:50.049 "req_id": 1 00:08:50.049 } 00:08:50.049 Got JSON-RPC error response 00:08:50.049 response: 00:08:50.049 { 00:08:50.049 "code": -32602, 00:08:50.049 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:50.049 }' 00:08:50.049 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:50.049 { 00:08:50.049 "nqn": 
"nqn.2016-06.io.spdk:cnode15932", 00:08:50.049 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:50.049 "method": "nvmf_create_subsystem", 00:08:50.049 "req_id": 1 00:08:50.049 } 00:08:50.049 Got JSON-RPC error response 00:08:50.049 response: 00:08:50.049 { 00:08:50.049 "code": -32602, 00:08:50.049 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:50.049 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:50.049 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:50.049 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12232 00:08:50.049 [2024-07-12 17:17:08.783057] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12232: invalid model number 'SPDK_Controller' 00:08:50.049 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:50.049 { 00:08:50.049 "nqn": "nqn.2016-06.io.spdk:cnode12232", 00:08:50.049 "model_number": "SPDK_Controller\u001f", 00:08:50.049 "method": "nvmf_create_subsystem", 00:08:50.049 "req_id": 1 00:08:50.049 } 00:08:50.049 Got JSON-RPC error response 00:08:50.049 response: 00:08:50.049 { 00:08:50.049 "code": -32602, 00:08:50.049 "message": "Invalid MN SPDK_Controller\u001f" 00:08:50.049 }' 00:08:50.049 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:50.049 { 00:08:50.049 "nqn": "nqn.2016-06.io.spdk:cnode12232", 00:08:50.049 "model_number": "SPDK_Controller\u001f", 00:08:50.049 "method": "nvmf_create_subsystem", 00:08:50.049 "req_id": 1 00:08:50.049 } 00:08:50.049 Got JSON-RPC error response 00:08:50.049 response: 00:08:50.049 { 00:08:50.049 "code": -32602, 00:08:50.049 "message": "Invalid MN SPDK_Controller\u001f" 00:08:50.049 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:50.049 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:50.049 17:17:08 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@19 -- # local length=21 ll 00:08:50.049 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:50.049 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:50.049 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:50.050 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:50.050 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.050 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:08:50.050 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:50.050 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:08:50.050 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.050 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x7f' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='|' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 44 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ \ == \- ]] 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '\^g,cM|^,/|g/1,Wik<' 00:08:50.308 17:17:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\^g,cM|^,/|g/1,Wik<' nqn.2016-06.io.spdk:cnode20536 00:08:50.567 [2024-07-12 17:17:09.104174] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20536: invalid serial number '\^g,cM|^,/|g/1,Wik<' 00:08:50.567 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:50.567 { 00:08:50.567 "nqn": "nqn.2016-06.io.spdk:cnode20536", 00:08:50.567 "serial_number": "\\^\u007fg,cM|^,/|g\u007f/1,Wik<", 00:08:50.567 "method": "nvmf_create_subsystem", 00:08:50.567 "req_id": 1 00:08:50.567 } 00:08:50.567 Got JSON-RPC error response 00:08:50.567 response: 00:08:50.567 { 00:08:50.567 "code": -32602, 00:08:50.567 "message": "Invalid SN \\^\u007fg,cM|^,/|g\u007f/1,Wik<" 00:08:50.567 }' 00:08:50.567 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:50.567 { 00:08:50.567 "nqn": "nqn.2016-06.io.spdk:cnode20536", 00:08:50.567 "serial_number": "\\^\u007fg,cM|^,/|g\u007f/1,Wik<", 00:08:50.567 "method": "nvmf_create_subsystem", 00:08:50.567 "req_id": 1 00:08:50.567 } 00:08:50.567 Got JSON-RPC error response 00:08:50.567 response: 00:08:50.567 { 00:08:50.567 "code": -32602, 00:08:50.567 "message": "Invalid SN \\^\u007fg,cM|^,/|g\u007f/1,Wik<" 00:08:50.567 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:50.567 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:50.567 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:50.567 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # 
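The long run of printf/echo steps above is `gen_random_s` assembling a string one character at a time from the `chars` array (ASCII codes 32–127). A condensed sketch of the same idea — this compresses the traced per-character steps into one function and is not the verbatim invalid.sh source:

```shell
#!/usr/bin/env bash
# Condensed sketch of gen_random_s: build a string of $1 characters drawn
# from ASCII 32-127, the same range as the chars array in the trace above.
gen_random_s() {
  local length=$1 ll ch string=
  local chars=({32..127})                     # decimal codes, 96 entries
  for (( ll = 0; ll < length; ll++ )); do
    printf -v ch '%x' "${chars[RANDOM % ${#chars[@]}]}"
    string+=$(echo -e "\x$ch")                # append the decoded character
  done
  echo "$string"
}

RANDOM=0   # invalid.sh@16 seeds RANDOM the same way, for reproducible strings
gen_random_s 21
```

Seeding `RANDOM` before each call is what makes the generated serial numbers and model numbers reproducible from run to run.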
chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:50.567 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:50.567 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:50.567 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:50.567 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.567 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:50.567 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:50.567 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:08:50.568 
17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:50.568 17:17:09 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 
00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 
17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:50.568 
17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.568 17:17:09 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:50.568 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:08:50.569 17:17:09 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.569 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 1 == \- ]] 
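The hundreds of xtrace lines above are a single loop unrolled: for each position, the script picks a random code from the `chars` array (decimal 32 through 127) printed at the start, converts it to hex with `printf %x`, and appends the character with `echo -e`. A condensed sketch of that loop (the function name is hypothetical; the real helper lives in `target/invalid.sh`):

```shell
# Condensed sketch of the string-building loop traced above (name hypothetical).
# chars holds the decimal codes 32..127, matching the array at the top of the trace.
gen_random_string() {
    local length=$1 string='' ll code
    local chars=($(seq 32 127))
    for (( ll = 0; ll < length; ll++ )); do
        code=${chars[RANDOM % ${#chars[@]}]}
        # printf %x gives the hex code; echo -e '\xNN' emits the character itself
        string+=$(echo -e "\x$(printf %x "$code")")
    done
    echo "$string"
}

gen_random_string 41    # e.g. a 41-char junk model number like the one echoed below
```

Building the string one character at a time through `printf`/`echo -e` is what makes the trace so long: every appended character costs several xtrace lines.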
00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '1qIb91 D|@?TL^^PK4,/6=|-/5Jw#^u,@cQ#?_j{$' 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '1qIb91 D|@?TL^^PK4,/6=|-/5Jw#^u,@cQ#?_j{$' nqn.2016-06.io.spdk:cnode29458 00:08:50.827 [2024-07-12 17:17:09.541767] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29458: invalid model number '1qIb91 D|@?TL^^PK4,/6=|-/5Jw#^u,@cQ#?_j{$' 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:50.827 { 00:08:50.827 "nqn": "nqn.2016-06.io.spdk:cnode29458", 00:08:50.827 "model_number": "1qIb91 D|@?TL^^PK4,/6=|-/5Jw#^u,@cQ#?_j{$", 00:08:50.827 "method": "nvmf_create_subsystem", 00:08:50.827 "req_id": 1 00:08:50.827 } 00:08:50.827 Got JSON-RPC error response 00:08:50.827 response: 00:08:50.827 { 00:08:50.827 "code": -32602, 00:08:50.827 "message": "Invalid MN 1qIb91 D|@?TL^^PK4,/6=|-/5Jw#^u,@cQ#?_j{$" 00:08:50.827 }' 00:08:50.827 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:50.827 { 00:08:50.827 "nqn": "nqn.2016-06.io.spdk:cnode29458", 00:08:50.828 "model_number": "1qIb91 D|@?TL^^PK4,/6=|-/5Jw#^u,@cQ#?_j{$", 00:08:50.828 "method": "nvmf_create_subsystem", 00:08:50.828 "req_id": 1 00:08:50.828 } 00:08:50.828 Got JSON-RPC error response 00:08:50.828 response: 00:08:50.828 { 00:08:50.828 "code": -32602, 00:08:50.828 "message": "Invalid MN 1qIb91 D|@?TL^^PK4,/6=|-/5Jw#^u,@cQ#?_j{$" 00:08:50.828 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:50.828 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:51.086 [2024-07-12 17:17:09.726442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.086 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:51.343 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:51.343 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:51.343 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:51.343 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:51.343 17:17:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:51.343 [2024-07-12 17:17:10.117090] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:51.601 17:17:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:51.601 { 00:08:51.601 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:51.601 "listen_address": { 00:08:51.601 "trtype": "tcp", 00:08:51.601 "traddr": "", 00:08:51.601 "trsvcid": "4421" 00:08:51.601 }, 00:08:51.601 "method": "nvmf_subsystem_remove_listener", 00:08:51.601 "req_id": 1 00:08:51.601 } 00:08:51.601 Got JSON-RPC error response 00:08:51.601 response: 00:08:51.601 { 00:08:51.601 "code": -32602, 00:08:51.601 "message": "Invalid parameters" 00:08:51.601 }' 00:08:51.601 17:17:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:51.601 { 00:08:51.601 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:51.601 "listen_address": { 00:08:51.601 "trtype": "tcp", 00:08:51.601 "traddr": "", 00:08:51.601 "trsvcid": "4421" 00:08:51.601 }, 00:08:51.601 "method": "nvmf_subsystem_remove_listener", 00:08:51.601 "req_id": 1 00:08:51.601 } 00:08:51.601 Got JSON-RPC error response 00:08:51.601 response: 00:08:51.601 { 00:08:51.601 "code": -32602, 00:08:51.601 "message": "Invalid parameters" 00:08:51.601 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:51.601 17:17:10 
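The backslash-heavy patterns in the trace (e.g. `*\I\n\v\a\l\i\d\ \p\a\r\a\m\e\t\e\r\s*`) are just xtrace's rendering of a literal glob match. The negative-test idiom used throughout `invalid.sh` boils down to the following (the `out` value here is a trimmed sample, not a full captured response):

```shell
# Distilled negative-test idiom: capture the JSON-RPC error text from an RPC
# that is expected to fail, then glob-match it against the expected message.
out='request: {"code": -32602, "message": "Invalid parameters"}'
if [[ $out == *"Invalid parameters"* ]]; then
    echo "negative test passed"
else
    echo "unexpected error text" >&2
    exit 1
fi
```

Matching on the error message substring rather than the full JSON keeps the checks robust against incidental formatting changes in the response.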
nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2106 -i 0 00:08:51.601 [2024-07-12 17:17:10.317728] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2106: invalid cntlid range [0-65519] 00:08:51.601 17:17:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:51.601 { 00:08:51.601 "nqn": "nqn.2016-06.io.spdk:cnode2106", 00:08:51.601 "min_cntlid": 0, 00:08:51.601 "method": "nvmf_create_subsystem", 00:08:51.601 "req_id": 1 00:08:51.601 } 00:08:51.601 Got JSON-RPC error response 00:08:51.601 response: 00:08:51.601 { 00:08:51.601 "code": -32602, 00:08:51.601 "message": "Invalid cntlid range [0-65519]" 00:08:51.601 }' 00:08:51.601 17:17:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:51.601 { 00:08:51.601 "nqn": "nqn.2016-06.io.spdk:cnode2106", 00:08:51.601 "min_cntlid": 0, 00:08:51.601 "method": "nvmf_create_subsystem", 00:08:51.601 "req_id": 1 00:08:51.601 } 00:08:51.601 Got JSON-RPC error response 00:08:51.601 response: 00:08:51.601 { 00:08:51.601 "code": -32602, 00:08:51.601 "message": "Invalid cntlid range [0-65519]" 00:08:51.601 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:51.601 17:17:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22396 -i 65520 00:08:51.859 [2024-07-12 17:17:10.510364] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22396: invalid cntlid range [65520-65519] 00:08:51.859 17:17:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:51.859 { 00:08:51.859 "nqn": "nqn.2016-06.io.spdk:cnode22396", 00:08:51.859 "min_cntlid": 65520, 00:08:51.859 "method": "nvmf_create_subsystem", 00:08:51.859 "req_id": 1 00:08:51.859 } 00:08:51.859 Got JSON-RPC error response 00:08:51.859 
response: 00:08:51.859 { 00:08:51.859 "code": -32602, 00:08:51.859 "message": "Invalid cntlid range [65520-65519]" 00:08:51.859 }' 00:08:51.859 17:17:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:51.859 { 00:08:51.859 "nqn": "nqn.2016-06.io.spdk:cnode22396", 00:08:51.859 "min_cntlid": 65520, 00:08:51.859 "method": "nvmf_create_subsystem", 00:08:51.859 "req_id": 1 00:08:51.859 } 00:08:51.859 Got JSON-RPC error response 00:08:51.859 response: 00:08:51.859 { 00:08:51.859 "code": -32602, 00:08:51.859 "message": "Invalid cntlid range [65520-65519]" 00:08:51.859 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:51.859 17:17:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6581 -I 0 00:08:52.117 [2024-07-12 17:17:10.695049] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6581: invalid cntlid range [1-0] 00:08:52.117 17:17:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:08:52.117 { 00:08:52.117 "nqn": "nqn.2016-06.io.spdk:cnode6581", 00:08:52.117 "max_cntlid": 0, 00:08:52.117 "method": "nvmf_create_subsystem", 00:08:52.117 "req_id": 1 00:08:52.117 } 00:08:52.117 Got JSON-RPC error response 00:08:52.117 response: 00:08:52.117 { 00:08:52.117 "code": -32602, 00:08:52.117 "message": "Invalid cntlid range [1-0]" 00:08:52.117 }' 00:08:52.117 17:17:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:52.117 { 00:08:52.117 "nqn": "nqn.2016-06.io.spdk:cnode6581", 00:08:52.117 "max_cntlid": 0, 00:08:52.117 "method": "nvmf_create_subsystem", 00:08:52.117 "req_id": 1 00:08:52.117 } 00:08:52.117 Got JSON-RPC error response 00:08:52.117 response: 00:08:52.117 { 00:08:52.117 "code": -32602, 00:08:52.117 "message": "Invalid cntlid range [1-0]" 00:08:52.117 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:52.117 17:17:10 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15038 -I 65520 00:08:52.117 [2024-07-12 17:17:10.871615] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15038: invalid cntlid range [1-65520] 00:08:52.375 17:17:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:52.375 { 00:08:52.375 "nqn": "nqn.2016-06.io.spdk:cnode15038", 00:08:52.375 "max_cntlid": 65520, 00:08:52.375 "method": "nvmf_create_subsystem", 00:08:52.375 "req_id": 1 00:08:52.375 } 00:08:52.375 Got JSON-RPC error response 00:08:52.375 response: 00:08:52.375 { 00:08:52.375 "code": -32602, 00:08:52.376 "message": "Invalid cntlid range [1-65520]" 00:08:52.376 }' 00:08:52.376 17:17:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:08:52.376 { 00:08:52.376 "nqn": "nqn.2016-06.io.spdk:cnode15038", 00:08:52.376 "max_cntlid": 65520, 00:08:52.376 "method": "nvmf_create_subsystem", 00:08:52.376 "req_id": 1 00:08:52.376 } 00:08:52.376 Got JSON-RPC error response 00:08:52.376 response: 00:08:52.376 { 00:08:52.376 "code": -32602, 00:08:52.376 "message": "Invalid cntlid range [1-65520]" 00:08:52.376 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:52.376 17:17:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29025 -i 6 -I 5 00:08:52.376 [2024-07-12 17:17:11.044213] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29025: invalid cntlid range [6-5] 00:08:52.376 17:17:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:52.376 { 00:08:52.376 "nqn": "nqn.2016-06.io.spdk:cnode29025", 00:08:52.376 "min_cntlid": 6, 00:08:52.376 "max_cntlid": 5, 00:08:52.376 "method": "nvmf_create_subsystem", 00:08:52.376 "req_id": 1 00:08:52.376 } 00:08:52.376 Got JSON-RPC error response 
00:08:52.376 response: 00:08:52.376 { 00:08:52.376 "code": -32602, 00:08:52.376 "message": "Invalid cntlid range [6-5]" 00:08:52.376 }' 00:08:52.376 17:17:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:52.376 { 00:08:52.376 "nqn": "nqn.2016-06.io.spdk:cnode29025", 00:08:52.376 "min_cntlid": 6, 00:08:52.376 "max_cntlid": 5, 00:08:52.376 "method": "nvmf_create_subsystem", 00:08:52.376 "req_id": 1 00:08:52.376 } 00:08:52.376 Got JSON-RPC error response 00:08:52.376 response: 00:08:52.376 { 00:08:52.376 "code": -32602, 00:08:52.376 "message": "Invalid cntlid range [6-5]" 00:08:52.376 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:52.376 17:17:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:52.635 { 00:08:52.635 "name": "foobar", 00:08:52.635 "method": "nvmf_delete_target", 00:08:52.635 "req_id": 1 00:08:52.635 } 00:08:52.635 Got JSON-RPC error response 00:08:52.635 response: 00:08:52.635 { 00:08:52.635 "code": -32602, 00:08:52.635 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:52.635 }' 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:52.635 { 00:08:52.635 "name": "foobar", 00:08:52.635 "method": "nvmf_delete_target", 00:08:52.635 "req_id": 1 00:08:52.635 } 00:08:52.635 Got JSON-RPC error response 00:08:52.635 response: 00:08:52.635 { 00:08:52.635 "code": -32602, 00:08:52.635 "message": "The specified target doesn't exist, cannot delete it." 
00:08:52.635 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:52.635 rmmod nvme_tcp 00:08:52.635 rmmod nvme_fabrics 00:08:52.635 rmmod nvme_keyring 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3948183 ']' 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3948183 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3948183 ']' 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3948183 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3948183 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:52.635 17:17:11 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3948183' 00:08:52.635 killing process with pid 3948183 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3948183 00:08:52.635 17:17:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3948183 00:08:52.895 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:52.895 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:52.895 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:52.895 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:52.895 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:52.895 17:17:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.895 17:17:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.895 17:17:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.903 17:17:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:54.903 00:08:54.903 real 0m11.586s 00:08:54.903 user 0m19.343s 00:08:54.903 sys 0m4.940s 00:08:54.903 17:17:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.903 17:17:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:54.903 ************************************ 00:08:54.903 END TEST nvmf_invalid 00:08:54.903 ************************************ 00:08:54.903 17:17:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:54.903 17:17:13 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 
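Before the abort test starts, it is worth noting that the five cntlid failures exercised above ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]) all fall out of one rule enforced by `rpc_nvmf_create_subsystem` in `nvmf_rpc.c`: controller IDs must lie in 1..65519 and the range must not be inverted. A hypothetical shell rendering of that check:

```shell
# Hypothetical sketch of the cntlid validation applied by nvmf_rpc.c:
# both ends of the range must be within 1..65519, and min must not exceed max.
valid_cntlid_range() {
    local min=$1 max=$2
    (( min >= 1 && min <= 65519 && max >= 1 && max <= 65519 && min <= max ))
}

valid_cntlid_range 1 65519 && echo "ok"                      # default range passes
valid_cntlid_range 6 5 || echo "Invalid cntlid range [6-5]"  # inverted range fails
```

Each rejected RPC in the log corresponds to exactly one violated clause of this predicate.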
00:08:54.903 17:17:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:54.903 17:17:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.903 17:17:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:54.903 ************************************ 00:08:54.903 START TEST nvmf_abort 00:08:54.903 ************************************ 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:54.903 * Looking for test storage... 00:08:54.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:54.903 
17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.903 17:17:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.904 17:17:13 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.904 17:17:13 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.904 17:17:13 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.904 17:17:13 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:55.163 17:17:13 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:55.164 
17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:55.164 17:17:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # 
set +x 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.444 17:17:18 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:00.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.444 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:09:00.445 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:00.445 Found net devices under 0000:86:00.0: cvl_0_0 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:00.445 Found net devices under 0000:86:00.1: cvl_0_1 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:00.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:09:00.445 00:09:00.445 --- 10.0.0.2 ping statistics --- 00:09:00.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.445 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:09:00.445 00:09:00.445 --- 10.0.0.1 ping statistics --- 00:09:00.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.445 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3952490 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3952490 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3952490 ']' 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.445 17:17:18 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.445 17:17:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:00.445 [2024-07-12 17:17:19.023674] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:09:00.445 [2024-07-12 17:17:19.023714] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.445 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.445 [2024-07-12 17:17:19.078971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:00.445 [2024-07-12 17:17:19.157868] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.445 [2024-07-12 17:17:19.157904] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.445 [2024-07-12 17:17:19.157911] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.445 [2024-07-12 17:17:19.157917] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.446 [2024-07-12 17:17:19.157922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:00.446 [2024-07-12 17:17:19.158028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.446 [2024-07-12 17:17:19.158132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.446 [2024-07-12 17:17:19.158134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:01.383 [2024-07-12 17:17:19.870401] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:01.383 Malloc0 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:01.383 Delay0 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:01.383 [2024-07-12 17:17:19.938416] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.383 17:17:19 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:01.383 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.383 [2024-07-12 17:17:20.070481] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:03.920 Initializing NVMe Controllers 00:09:03.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:03.920 controller IO queue size 128 less than required 00:09:03.920 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:03.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:03.920 Initialization complete. Launching workers. 
00:09:03.920 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41287 00:09:03.920 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41348, failed to submit 62 00:09:03.920 success 41291, unsuccess 57, failed 0 00:09:03.920 17:17:22 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:03.920 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.920 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:03.920 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.920 17:17:22 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:03.920 17:17:22 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.921 rmmod nvme_tcp 00:09:03.921 rmmod nvme_fabrics 00:09:03.921 rmmod nvme_keyring 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3952490 ']' 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3952490 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3952490 ']' 00:09:03.921 17:17:22 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3952490 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3952490 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3952490' 00:09:03.921 killing process with pid 3952490 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3952490 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3952490 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.921 17:17:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.824 17:17:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:05.824 00:09:05.825 real 0m10.907s 00:09:05.825 user 0m12.832s 00:09:05.825 sys 0m4.969s 00:09:05.825 17:17:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:09:05.825 17:17:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.825 ************************************ 00:09:05.825 END TEST nvmf_abort 00:09:05.825 ************************************ 00:09:05.825 17:17:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:05.825 17:17:24 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:05.825 17:17:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:05.825 17:17:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.825 17:17:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:05.825 ************************************ 00:09:05.825 START TEST nvmf_ns_hotplug_stress 00:09:05.825 ************************************ 00:09:05.825 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:06.084 * Looking for test storage... 
00:09:06.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.084 17:17:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:06.084 17:17:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:11.353 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.353 
17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:11.353 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:11.353 
Found net devices under 0000:86:00.0: cvl_0_0 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:11.353 Found net devices under 0000:86:00.1: cvl_0_1 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:11.353 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.354 17:17:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:11.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:09:11.354 00:09:11.354 --- 10.0.0.2 ping statistics --- 00:09:11.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.354 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:11.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:09:11.354 00:09:11.354 --- 10.0.0.1 ping statistics --- 00:09:11.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.354 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3956487 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3956487 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3956487 ']' 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:11.354 17:17:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.354 [2024-07-12 17:17:29.955504] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:09:11.354 [2024-07-12 17:17:29.955546] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.354 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.354 [2024-07-12 17:17:30.011702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:11.354 [2024-07-12 17:17:30.099693] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.354 [2024-07-12 17:17:30.099729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.354 [2024-07-12 17:17:30.099736] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.354 [2024-07-12 17:17:30.099742] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.354 [2024-07-12 17:17:30.099747] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:11.354 [2024-07-12 17:17:30.099859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.354 [2024-07-12 17:17:30.099949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.354 [2024-07-12 17:17:30.099950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.291 17:17:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:12.291 17:17:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:12.291 17:17:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:12.291 17:17:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:12.291 17:17:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.291 17:17:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.291 17:17:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:12.291 17:17:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:12.291 [2024-07-12 17:17:30.955900] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.291 17:17:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:12.550 17:17:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.809 [2024-07-12 17:17:31.337242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:09:12.809 17:17:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.809 17:17:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:13.066 Malloc0 00:09:13.066 17:17:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:13.324 Delay0 00:09:13.324 17:17:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.324 17:17:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:13.582 NULL1 00:09:13.582 17:17:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:13.841 17:17:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3956816 00:09:13.841 17:17:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:13.841 17:17:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:13.841 17:17:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.841 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.841 17:17:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.098 17:17:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:14.098 17:17:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:14.356 true 00:09:14.356 17:17:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:14.356 17:17:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.615 17:17:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.615 17:17:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:14.615 17:17:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:14.873 true 00:09:14.873 17:17:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:14.873 17:17:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.132 Read completed with error (sct=0, sc=11) 00:09:15.132 17:17:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.132 17:17:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:15.132 17:17:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:15.391 true 00:09:15.391 17:17:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:15.391 17:17:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.326 17:17:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.326 17:17:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:16.326 17:17:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:16.584 true 00:09:16.584 17:17:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:16.584 17:17:35 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.842 17:17:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.101 17:17:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:17.101 17:17:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:17.101 true 00:09:17.101 17:17:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:17.101 17:17:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.359 17:17:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.618 17:17:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:17.618 17:17:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:17.618 true 00:09:17.618 17:17:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:17.618 17:17:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.554 17:17:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.836 17:17:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:18.836 17:17:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:18.836 true 00:09:18.836 17:17:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:18.836 17:17:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.095 17:17:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.354 17:17:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:19.354 17:17:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:19.354 true 00:09:19.354 17:17:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:19.354 17:17:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:09:20.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.731 17:17:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.731 17:17:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:20.731 17:17:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:20.989 true 00:09:20.989 17:17:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:20.989 17:17:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.981 17:17:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.981 17:17:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:21.981 17:17:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:22.240 true 00:09:22.240 17:17:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:22.241 
17:17:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.241 17:17:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.499 17:17:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:22.499 17:17:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:22.758 true 00:09:22.758 17:17:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:22.758 17:17:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.954 17:17:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.954 17:17:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:23.954 17:17:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:24.213 true 00:09:24.213 17:17:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:24.214 17:17:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.149 17:17:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.149 17:17:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:25.149 17:17:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:25.408 true 00:09:25.408 17:17:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:25.408 17:17:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.667 17:17:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.667 17:17:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:25.667 17:17:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:25.926 true 00:09:25.926 17:17:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:25.926 17:17:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.304 17:17:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.304 17:17:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:27.304 17:17:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:27.562 true 00:09:27.562 17:17:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:27.562 17:17:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.498 17:17:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.498 17:17:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:28.498 17:17:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:28.757 true 00:09:28.757 17:17:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:28.757 17:17:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.757 17:17:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.016 17:17:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:29.016 17:17:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:29.275 true 00:09:29.275 17:17:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:29.275 17:17:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.652 17:17:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.652 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:09:30.652 17:17:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:30.652 17:17:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:30.911 true 00:09:30.911 17:17:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:30.911 17:17:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.847 17:17:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.847 17:17:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:31.847 17:17:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:32.106 true 00:09:32.106 17:17:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:32.106 17:17:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.106 17:17:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.365 17:17:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:32.365 17:17:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:32.624 true 00:09:32.624 17:17:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:32.624 17:17:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.882 17:17:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.882 17:17:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:32.882 17:17:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:33.141 true 00:09:33.141 17:17:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:33.141 17:17:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.400 17:17:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.400 17:17:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:33.400 17:17:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:33.659 true 00:09:33.659 17:17:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:33.659 17:17:52 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.042 17:17:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.042 17:17:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:35.042 17:17:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:35.300 true 00:09:35.300 17:17:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:35.300 17:17:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.234 17:17:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.235 17:17:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:36.235 17:17:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:36.492 true 00:09:36.492 17:17:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:36.492 17:17:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.492 17:17:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.749 17:17:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:36.749 17:17:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:37.007 true 00:09:37.007 17:17:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:37.007 17:17:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.007 17:17:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.264 17:17:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:37.264 17:17:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:37.522 true 00:09:37.522 17:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:37.522 17:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.780 17:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.780 17:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:37.780 17:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:38.038 true 00:09:38.038 17:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:38.038 17:17:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.021 17:17:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.280 17:17:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:39.280 17:17:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:39.538 true 00:09:39.538 17:17:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:39.538 17:17:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.475 17:17:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.475 17:17:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:40.475 17:17:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:40.734 true 00:09:40.734 17:17:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:40.734 17:17:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.992 17:17:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.992 17:17:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:40.992 17:17:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:41.251 true 00:09:41.251 17:17:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:41.251 17:17:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.628 17:18:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.628 17:18:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:42.628 17:18:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:42.886 true 00:09:42.886 17:18:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:42.886 17:18:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.821 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.821 17:18:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.821 17:18:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:43.821 17:18:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:44.084 true 
00:09:44.084 17:18:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:44.084 17:18:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.084 Initializing NVMe Controllers 00:09:44.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:44.084 Controller IO queue size 128, less than required. 00:09:44.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:44.084 Controller IO queue size 128, less than required. 00:09:44.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:44.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:44.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:44.084 Initialization complete. Launching workers. 
00:09:44.084 ======================================================== 00:09:44.084 Latency(us) 00:09:44.084 Device Information : IOPS MiB/s Average min max 00:09:44.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1965.48 0.96 40335.34 1530.03 1031304.16 00:09:44.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16111.88 7.87 7924.92 2732.33 457479.61 00:09:44.084 ======================================================== 00:09:44.084 Total : 18077.36 8.83 11448.78 1530.03 1031304.16 00:09:44.084 00:09:44.342 17:18:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.342 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:44.342 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:44.598 true 00:09:44.598 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3956816 00:09:44.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3956816) - No such process 00:09:44.598 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3956816 00:09:44.599 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.856 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:44.856 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:44.856 17:18:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:44.856 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:44.856 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:44.856 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:45.114 null0 00:09:45.114 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:45.114 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:45.114 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:45.372 null1 00:09:45.372 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:45.372 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:45.372 17:18:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:45.629 null2 00:09:45.629 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:45.629 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:45.629 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:45.629 null3 00:09:45.630 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:45.630 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:09:45.630 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:45.888 null4 00:09:45.888 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:45.888 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:45.888 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:46.146 null5 00:09:46.146 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:46.146 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:46.147 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:46.147 null6 00:09:46.147 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:46.147 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:46.147 17:18:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:46.405 null7 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:46.405 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3962489 3962491 3962492 3962494 3962496 3962498 3962499 3962501 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.406 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:46.664 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:46.664 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:46.664 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:09:46.664 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:46.664 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:46.664 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:46.664 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:46.664 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.922 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:46.923 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:47.182 17:18:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.182 17:18:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:47.441 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:47.441 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:47.441 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:47.441 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:47.441 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:47.441 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:47.441 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:47.441 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.441 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.441 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.441 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:47.698 17:18:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.698 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:47.956 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:48.213 17:18:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.214 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.472 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:48.472 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.472 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.472 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:48.472 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.472 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.472 17:18:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:48.472 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.472 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.472 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:48.472 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.472 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.472 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:48.472 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:48.472 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:48.472 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:48.472 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:48.472 17:18:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:48.472 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:48.472 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:48.472 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.731 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:48.990 17:18:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:48.990 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.248 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.248 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:49.248 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.248 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.248 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:49.248 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:49.248 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:49.248 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:49.248 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:49.249 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.249 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:49.249 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:49.249 17:18:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:49.507 17:18:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:49.507 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:49.766 17:18:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.766 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:50.025 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:50.025 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:50.025 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:50.025 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:50.025 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:50.025 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:50.025 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.025 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:50.283 rmmod nvme_tcp 00:09:50.283 rmmod nvme_fabrics 00:09:50.283 rmmod nvme_keyring 00:09:50.283 17:18:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3956487 ']' 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3956487 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3956487 ']' 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3956487 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:50.283 17:18:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3956487 00:09:50.283 17:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:50.283 17:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:50.283 17:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3956487' 00:09:50.283 killing process with pid 3956487 00:09:50.283 17:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3956487 00:09:50.283 17:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3956487 00:09:50.542 17:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:50.542 17:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:50.542 17:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:50.542 17:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.542 17:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.542 17:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.542 17:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.542 17:18:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.076 17:18:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:53.076 00:09:53.076 real 0m46.729s 00:09:53.076 user 3m11.699s 00:09:53.076 sys 0m15.017s 00:09:53.076 17:18:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:53.076 17:18:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:53.076 ************************************ 00:09:53.076 END TEST nvmf_ns_hotplug_stress 00:09:53.076 ************************************ 00:09:53.076 17:18:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:53.076 17:18:11 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:53.076 17:18:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:53.076 17:18:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.076 17:18:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:53.076 ************************************ 00:09:53.076 START TEST nvmf_connect_stress 00:09:53.076 ************************************ 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:53.076 * Looking for test storage... 
00:09:53.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.076 17:18:11 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:53.076 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:53.077 17:18:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:53.077 17:18:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:58.349 17:18:16 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.349 
17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:58.349 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.349 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:58.350 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.350 
17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:58.350 Found net devices under 0000:86:00.0: cvl_0_0 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.350 17:18:16 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:58.350 Found net devices under 0000:86:00.1: cvl_0_1 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:58.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:58.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:09:58.350 00:09:58.350 --- 10.0.0.2 ping statistics --- 00:09:58.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.350 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:09:58.350 00:09:58.350 --- 10.0.0.1 ping statistics --- 00:09:58.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.350 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.350 17:18:16 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3967099 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3967099 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3967099 ']' 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.350 17:18:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:58.350 [2024-07-12 17:18:16.783605] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:09:58.350 [2024-07-12 17:18:16.783650] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.350 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.350 [2024-07-12 17:18:16.842644] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:58.350 [2024-07-12 17:18:16.922383] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:58.350 [2024-07-12 17:18:16.922423] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.350 [2024-07-12 17:18:16.922430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.350 [2024-07-12 17:18:16.922436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.350 [2024-07-12 17:18:16.922441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.350 [2024-07-12 17:18:16.922538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.350 [2024-07-12 17:18:16.922640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.350 [2024-07-12 17:18:16.922642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:58.919 [2024-07-12 17:18:17.643047] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.919 17:18:17 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:58.919 [2024-07-12 17:18:17.670471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:58.919 NULL1 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3967280 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:58.919 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i 
in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.179 17:18:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.440 17:18:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.440 17:18:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:09:59.440 17:18:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:59.440 17:18:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.440 17:18:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.703 17:18:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.703 17:18:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:09:59.703 17:18:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:59.703 17:18:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.703 17:18:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.962 17:18:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:09:59.962 17:18:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:09:59.962 17:18:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:59.962 17:18:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.962 17:18:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.543 17:18:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.543 17:18:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:00.543 17:18:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:00.543 17:18:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.543 17:18:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.804 17:18:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.804 17:18:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:00.804 17:18:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:00.804 17:18:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.804 17:18:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:01.062 17:18:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.062 17:18:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:01.062 17:18:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:01.062 17:18:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.062 17:18:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:01.321 17:18:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:01.321 17:18:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:01.321 17:18:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:01.321 17:18:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.321 17:18:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:01.889 17:18:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.889 17:18:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:01.889 17:18:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:01.889 17:18:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.889 17:18:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:02.147 17:18:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.147 17:18:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:02.147 17:18:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:02.147 17:18:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.147 17:18:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:02.405 17:18:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.405 17:18:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:02.405 17:18:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:02.405 17:18:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.405 17:18:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:02.664 17:18:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:02.664 17:18:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:02.664 17:18:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:02.664 17:18:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.664 17:18:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:02.923 17:18:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.923 17:18:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:02.923 17:18:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:02.923 17:18:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.923 17:18:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.490 17:18:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.490 17:18:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:03.490 17:18:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.490 17:18:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.490 17:18:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:03.748 17:18:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.748 17:18:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:03.748 17:18:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:03.748 17:18:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.748 17:18:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.007 17:18:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:04.007 17:18:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:04.007 17:18:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.007 17:18:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.007 17:18:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.266 17:18:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.266 17:18:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:04.266 17:18:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.266 17:18:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.266 17:18:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.526 17:18:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.526 17:18:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:04.526 17:18:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:04.526 17:18:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.526 17:18:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.093 17:18:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.093 17:18:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:05.093 17:18:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:05.093 17:18:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.093 17:18:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.352 17:18:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:05.352 17:18:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:05.352 17:18:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:05.352 17:18:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.352 17:18:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.610 17:18:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.610 17:18:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:05.610 17:18:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:05.610 17:18:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.610 17:18:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:05.868 17:18:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.868 17:18:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:05.868 17:18:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:05.868 17:18:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.868 17:18:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.433 17:18:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.433 17:18:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:06.433 17:18:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:06.433 17:18:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.433 17:18:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.691 17:18:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:06.691 17:18:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:06.691 17:18:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:06.691 17:18:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.691 17:18:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.949 17:18:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.949 17:18:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:06.949 17:18:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:06.949 17:18:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.949 17:18:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.207 17:18:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.207 17:18:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:07.207 17:18:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:07.207 17:18:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.207 17:18:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.465 17:18:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.465 17:18:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:07.465 17:18:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:07.465 17:18:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.465 17:18:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.031 17:18:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:08.031 17:18:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:08.031 17:18:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.031 17:18:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.031 17:18:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.289 17:18:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.289 17:18:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:08.289 17:18:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.289 17:18:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.289 17:18:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.547 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.547 17:18:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:08.547 17:18:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.547 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.547 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.806 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.806 17:18:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:08.806 17:18:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.807 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.807 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.064 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:10:09.064 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.064 17:18:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3967280 00:10:09.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3967280) - No such process 00:10:09.064 17:18:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3967280 00:10:09.064 17:18:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:09.322 17:18:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:09.322 17:18:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:09.322 17:18:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.322 17:18:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:09.322 17:18:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.322 17:18:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.323 rmmod nvme_tcp 00:10:09.323 rmmod nvme_fabrics 00:10:09.323 rmmod nvme_keyring 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3967099 ']' 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3967099 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@948 -- # '[' -z 3967099 ']' 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3967099 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3967099 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3967099' 00:10:09.323 killing process with pid 3967099 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3967099 00:10:09.323 17:18:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3967099 00:10:09.582 17:18:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.582 17:18:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.582 17:18:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.582 17:18:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.582 17:18:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.582 17:18:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.582 17:18:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.582 17:18:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.486 17:18:30 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.486 00:10:11.486 real 0m18.874s 00:10:11.486 user 0m41.164s 00:10:11.486 sys 0m7.853s 00:10:11.486 17:18:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.486 17:18:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.486 ************************************ 00:10:11.486 END TEST nvmf_connect_stress 00:10:11.486 ************************************ 00:10:11.486 17:18:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:11.486 17:18:30 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:11.486 17:18:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:11.486 17:18:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.486 17:18:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:11.746 ************************************ 00:10:11.746 START TEST nvmf_fused_ordering 00:10:11.746 ************************************ 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:11.746 * Looking for test storage... 
00:10:11.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.746 17:18:30 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:11.746 17:18:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:17.026 17:18:35 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.026 
17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:17.026 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:17.026 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.026 
17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:17.026 Found net devices under 0000:86:00.0: cvl_0_0 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.026 17:18:35 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:17.026 Found net devices under 0000:86:00.1: cvl_0_1 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:17.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:17.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:10:17.026 00:10:17.026 --- 10.0.0.2 ping statistics --- 00:10:17.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.026 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:10:17.026 00:10:17.026 --- 10.0.0.1 ping statistics --- 00:10:17.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.026 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:17.026 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:17.027 17:18:35 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3972429 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3972429 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3972429 ']' 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:17.027 17:18:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:17.027 [2024-07-12 17:18:35.408010] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:10:17.027 [2024-07-12 17:18:35.408052] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.027 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.027 [2024-07-12 17:18:35.464607] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.027 [2024-07-12 17:18:35.543654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:17.027 [2024-07-12 17:18:35.543687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.027 [2024-07-12 17:18:35.543697] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.027 [2024-07-12 17:18:35.543704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.027 [2024-07-12 17:18:35.543709] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.027 [2024-07-12 17:18:35.543725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.595 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:17.595 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:17.595 17:18:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:17.595 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:17.595 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:17.596 [2024-07-12 17:18:36.250921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:17.596 [2024-07-12 17:18:36.271060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:17.596 NULL1 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.596 17:18:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:17.596 [2024-07-12 17:18:36.325473] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:10:17.596 [2024-07-12 17:18:36.325511] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3972619 ] 00:10:17.596 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.162 Attached to nqn.2016-06.io.spdk:cnode1 00:10:18.162 Namespace ID: 1 size: 1GB 00:10:18.162 fused_ordering(0) 00:10:18.162 fused_ordering(1) 00:10:18.162 fused_ordering(2) 00:10:18.162 fused_ordering(3) 00:10:18.162 fused_ordering(4) 00:10:18.162 fused_ordering(5) 00:10:18.162 fused_ordering(6) 00:10:18.162 fused_ordering(7) 00:10:18.162 fused_ordering(8) 00:10:18.162 fused_ordering(9) 00:10:18.162 fused_ordering(10) 00:10:18.162 fused_ordering(11) 00:10:18.162 fused_ordering(12) 00:10:18.162 fused_ordering(13) 00:10:18.162 fused_ordering(14) 00:10:18.162 fused_ordering(15) 00:10:18.162 fused_ordering(16) 00:10:18.162 fused_ordering(17) 00:10:18.162 fused_ordering(18) 00:10:18.162 fused_ordering(19) 00:10:18.162 fused_ordering(20) 00:10:18.162 fused_ordering(21) 00:10:18.162 fused_ordering(22) 00:10:18.162 fused_ordering(23) 00:10:18.162 fused_ordering(24) 00:10:18.162 fused_ordering(25) 00:10:18.162 
fused_ordering(26) 00:10:18.162 fused_ordering(27) 00:10:18.162 fused_ordering(28) 00:10:18.162 fused_ordering(29) 00:10:18.162 fused_ordering(30) 00:10:18.162 fused_ordering(31) 00:10:18.162 fused_ordering(32) 00:10:18.162 fused_ordering(33) 00:10:18.162 fused_ordering(34) 00:10:18.162 fused_ordering(35) 00:10:18.162 fused_ordering(36) 00:10:18.162 fused_ordering(37) 00:10:18.162 fused_ordering(38) 00:10:18.162 fused_ordering(39) 00:10:18.162 fused_ordering(40) 00:10:18.162 fused_ordering(41) 00:10:18.162 fused_ordering(42) 00:10:18.162 fused_ordering(43) 00:10:18.162 fused_ordering(44) 00:10:18.162 fused_ordering(45) 00:10:18.162 fused_ordering(46) 00:10:18.162 fused_ordering(47) 00:10:18.162 fused_ordering(48) 00:10:18.162 fused_ordering(49) 00:10:18.162 fused_ordering(50) 00:10:18.162 fused_ordering(51) 00:10:18.162 fused_ordering(52) 00:10:18.162 fused_ordering(53) 00:10:18.162 fused_ordering(54) 00:10:18.162 fused_ordering(55) 00:10:18.162 fused_ordering(56) 00:10:18.162 fused_ordering(57) 00:10:18.162 fused_ordering(58) 00:10:18.162 fused_ordering(59) 00:10:18.162 fused_ordering(60) 00:10:18.162 fused_ordering(61) 00:10:18.162 fused_ordering(62) 00:10:18.162 fused_ordering(63) 00:10:18.162 fused_ordering(64) 00:10:18.162 fused_ordering(65) 00:10:18.162 fused_ordering(66) 00:10:18.162 fused_ordering(67) 00:10:18.162 fused_ordering(68) 00:10:18.162 fused_ordering(69) 00:10:18.162 fused_ordering(70) 00:10:18.162 fused_ordering(71) 00:10:18.162 fused_ordering(72) 00:10:18.162 fused_ordering(73) 00:10:18.162 fused_ordering(74) 00:10:18.162 fused_ordering(75) 00:10:18.162 fused_ordering(76) 00:10:18.162 fused_ordering(77) 00:10:18.162 fused_ordering(78) 00:10:18.162 fused_ordering(79) 00:10:18.162 fused_ordering(80) 00:10:18.162 fused_ordering(81) 00:10:18.162 fused_ordering(82) 00:10:18.162 fused_ordering(83) 00:10:18.162 fused_ordering(84) 00:10:18.162 fused_ordering(85) 00:10:18.162 fused_ordering(86) 00:10:18.162 fused_ordering(87) 00:10:18.162 
fused_ordering(88) 00:10:18.162 fused_ordering(89) 00:10:18.162 fused_ordering(90) 00:10:18.162 fused_ordering(91) 00:10:18.162 fused_ordering(92) 00:10:18.162 fused_ordering(93) 00:10:18.162 fused_ordering(94) 00:10:18.162 fused_ordering(95) 00:10:18.162 fused_ordering(96) 00:10:18.162 fused_ordering(97) 00:10:18.162 fused_ordering(98) 00:10:18.162 fused_ordering(99) 00:10:18.162 fused_ordering(100) 00:10:18.162 fused_ordering(101) 00:10:18.162 fused_ordering(102) 00:10:18.162 fused_ordering(103) 00:10:18.162 fused_ordering(104) 00:10:18.162 fused_ordering(105) 00:10:18.162 fused_ordering(106) 00:10:18.162 fused_ordering(107) 00:10:18.162 fused_ordering(108) 00:10:18.162 fused_ordering(109) 00:10:18.162 fused_ordering(110) 00:10:18.162 fused_ordering(111) 00:10:18.162 fused_ordering(112) 00:10:18.162 fused_ordering(113) 00:10:18.162 fused_ordering(114) 00:10:18.162 fused_ordering(115) 00:10:18.162 fused_ordering(116) 00:10:18.163 fused_ordering(117) 00:10:18.163 fused_ordering(118) 00:10:18.163 fused_ordering(119) 00:10:18.163 fused_ordering(120) 00:10:18.163 fused_ordering(121) 00:10:18.163 fused_ordering(122) 00:10:18.163 fused_ordering(123) 00:10:18.163 fused_ordering(124) 00:10:18.163 fused_ordering(125) 00:10:18.163 fused_ordering(126) 00:10:18.163 fused_ordering(127) 00:10:18.163 fused_ordering(128) 00:10:18.163 fused_ordering(129) 00:10:18.163 fused_ordering(130) 00:10:18.163 fused_ordering(131) 00:10:18.163 fused_ordering(132) 00:10:18.163 fused_ordering(133) 00:10:18.163 fused_ordering(134) 00:10:18.163 fused_ordering(135) 00:10:18.163 fused_ordering(136) 00:10:18.163 fused_ordering(137) 00:10:18.163 fused_ordering(138) 00:10:18.163 fused_ordering(139) 00:10:18.163 fused_ordering(140) 00:10:18.163 fused_ordering(141) 00:10:18.163 fused_ordering(142) 00:10:18.163 fused_ordering(143) 00:10:18.163 fused_ordering(144) 00:10:18.163 fused_ordering(145) 00:10:18.163 fused_ordering(146) 00:10:18.163 fused_ordering(147) 00:10:18.163 fused_ordering(148) 
00:10:18.163 fused_ordering(149) 00:10:18.163 fused_ordering(150) 00:10:18.163 fused_ordering(151) 00:10:18.163 fused_ordering(152) 00:10:18.163 fused_ordering(153) 00:10:18.163 fused_ordering(154) 00:10:18.163 fused_ordering(155) 00:10:18.163 fused_ordering(156) 00:10:18.163 fused_ordering(157) 00:10:18.163 fused_ordering(158) 00:10:18.163 fused_ordering(159) 00:10:18.163 fused_ordering(160) 00:10:18.163 fused_ordering(161) 00:10:18.163 fused_ordering(162) 00:10:18.163 fused_ordering(163) 00:10:18.163 fused_ordering(164) 00:10:18.163 fused_ordering(165) 00:10:18.163 fused_ordering(166) 00:10:18.163 fused_ordering(167) 00:10:18.163 fused_ordering(168) 00:10:18.163 fused_ordering(169) 00:10:18.163 fused_ordering(170) 00:10:18.163 fused_ordering(171) 00:10:18.163 fused_ordering(172) 00:10:18.163 fused_ordering(173) 00:10:18.163 fused_ordering(174) 00:10:18.163 fused_ordering(175) 00:10:18.163 fused_ordering(176) 00:10:18.163 fused_ordering(177) 00:10:18.163 fused_ordering(178) 00:10:18.163 fused_ordering(179) 00:10:18.163 fused_ordering(180) 00:10:18.163 fused_ordering(181) 00:10:18.163 fused_ordering(182) 00:10:18.163 fused_ordering(183) 00:10:18.163 fused_ordering(184) 00:10:18.163 fused_ordering(185) 00:10:18.163 fused_ordering(186) 00:10:18.163 fused_ordering(187) 00:10:18.163 fused_ordering(188) 00:10:18.163 fused_ordering(189) 00:10:18.163 fused_ordering(190) 00:10:18.163 fused_ordering(191) 00:10:18.163 fused_ordering(192) 00:10:18.163 fused_ordering(193) 00:10:18.163 fused_ordering(194) 00:10:18.163 fused_ordering(195) 00:10:18.163 fused_ordering(196) 00:10:18.163 fused_ordering(197) 00:10:18.163 fused_ordering(198) 00:10:18.163 fused_ordering(199) 00:10:18.163 fused_ordering(200) 00:10:18.163 fused_ordering(201) 00:10:18.163 fused_ordering(202) 00:10:18.163 fused_ordering(203) 00:10:18.163 fused_ordering(204) 00:10:18.163 fused_ordering(205) 00:10:18.422 fused_ordering(206) 00:10:18.422 fused_ordering(207) 00:10:18.422 fused_ordering(208) 00:10:18.422 
fused_ordering(209) 00:10:18.422 fused_ordering(210) 00:10:18.422 fused_ordering(211) 00:10:18.422 fused_ordering(212) 00:10:18.422 fused_ordering(213) 00:10:18.422 fused_ordering(214) 00:10:18.422 fused_ordering(215) 00:10:18.422 fused_ordering(216) 00:10:18.422 fused_ordering(217) 00:10:18.422 fused_ordering(218) 00:10:18.422 fused_ordering(219) 00:10:18.422 fused_ordering(220) 00:10:18.422 fused_ordering(221) 00:10:18.422 fused_ordering(222) 00:10:18.422 fused_ordering(223) 00:10:18.422 fused_ordering(224) 00:10:18.422 fused_ordering(225) 00:10:18.422 fused_ordering(226) 00:10:18.422 fused_ordering(227) 00:10:18.422 fused_ordering(228) 00:10:18.422 fused_ordering(229) 00:10:18.422 fused_ordering(230) 00:10:18.422 fused_ordering(231) 00:10:18.422 fused_ordering(232) 00:10:18.422 fused_ordering(233) 00:10:18.422 fused_ordering(234) 00:10:18.422 fused_ordering(235) 00:10:18.422 fused_ordering(236) 00:10:18.422 fused_ordering(237) 00:10:18.422 fused_ordering(238) 00:10:18.422 fused_ordering(239) 00:10:18.422 fused_ordering(240) 00:10:18.422 fused_ordering(241) 00:10:18.422 fused_ordering(242) 00:10:18.422 fused_ordering(243) 00:10:18.422 fused_ordering(244) 00:10:18.422 fused_ordering(245) 00:10:18.422 fused_ordering(246) 00:10:18.422 fused_ordering(247) 00:10:18.422 fused_ordering(248) 00:10:18.422 fused_ordering(249) 00:10:18.422 fused_ordering(250) 00:10:18.422 fused_ordering(251) 00:10:18.422 fused_ordering(252) 00:10:18.422 fused_ordering(253) 00:10:18.422 fused_ordering(254) 00:10:18.422 fused_ordering(255) 00:10:18.422 fused_ordering(256) 00:10:18.422 fused_ordering(257) 00:10:18.422 fused_ordering(258) 00:10:18.422 fused_ordering(259) 00:10:18.422 fused_ordering(260) 00:10:18.422 fused_ordering(261) 00:10:18.422 fused_ordering(262) 00:10:18.422 fused_ordering(263) 00:10:18.422 fused_ordering(264) 00:10:18.422 fused_ordering(265) 00:10:18.422 fused_ordering(266) 00:10:18.422 fused_ordering(267) 00:10:18.422 fused_ordering(268) 00:10:18.422 fused_ordering(269) 
00:10:18.422 fused_ordering(270) 00:10:18.422 fused_ordering(271) 00:10:18.422 fused_ordering(272) 00:10:18.422 fused_ordering(273) 00:10:18.422 fused_ordering(274) 00:10:18.422 fused_ordering(275) 00:10:18.422 fused_ordering(276) 00:10:18.422 fused_ordering(277) 00:10:18.422 fused_ordering(278) 00:10:18.422 fused_ordering(279) 00:10:18.422 fused_ordering(280) 00:10:18.422 fused_ordering(281) 00:10:18.422 fused_ordering(282) 00:10:18.422 fused_ordering(283) 00:10:18.422 fused_ordering(284) 00:10:18.422 fused_ordering(285) 00:10:18.422 fused_ordering(286) 00:10:18.422 fused_ordering(287) 00:10:18.422 fused_ordering(288) 00:10:18.422 fused_ordering(289) 00:10:18.422 fused_ordering(290) 00:10:18.422 fused_ordering(291) 00:10:18.422 fused_ordering(292) 00:10:18.422 fused_ordering(293) 00:10:18.422 fused_ordering(294) 00:10:18.422 fused_ordering(295) 00:10:18.422 fused_ordering(296) 00:10:18.422 fused_ordering(297) 00:10:18.422 fused_ordering(298) 00:10:18.422 fused_ordering(299) 00:10:18.422 fused_ordering(300) 00:10:18.422 fused_ordering(301) 00:10:18.422 fused_ordering(302) 00:10:18.422 fused_ordering(303) 00:10:18.422 fused_ordering(304) 00:10:18.422 fused_ordering(305) 00:10:18.422 fused_ordering(306) 00:10:18.422 fused_ordering(307) 00:10:18.422 fused_ordering(308) 00:10:18.422 fused_ordering(309) 00:10:18.422 fused_ordering(310) 00:10:18.422 fused_ordering(311) 00:10:18.422 fused_ordering(312) 00:10:18.422 fused_ordering(313) 00:10:18.422 fused_ordering(314) 00:10:18.422 fused_ordering(315) 00:10:18.422 fused_ordering(316) 00:10:18.422 fused_ordering(317) 00:10:18.422 fused_ordering(318) 00:10:18.422 fused_ordering(319) 00:10:18.422 fused_ordering(320) 00:10:18.422 fused_ordering(321) 00:10:18.422 fused_ordering(322) 00:10:18.422 fused_ordering(323) 00:10:18.422 fused_ordering(324) 00:10:18.422 fused_ordering(325) 00:10:18.422 fused_ordering(326) 00:10:18.422 fused_ordering(327) 00:10:18.422 fused_ordering(328) 00:10:18.422 fused_ordering(329) 00:10:18.422 
fused_ordering(330) 00:10:18.422 fused_ordering(331) 00:10:18.422 fused_ordering(332) 00:10:18.422 fused_ordering(333) 00:10:18.422 fused_ordering(334) 00:10:18.422 fused_ordering(335) 00:10:18.422 fused_ordering(336) 00:10:18.422 fused_ordering(337) 00:10:18.422 fused_ordering(338) 00:10:18.422 fused_ordering(339) 00:10:18.422 fused_ordering(340) 00:10:18.422 fused_ordering(341) 00:10:18.422 fused_ordering(342) 00:10:18.422 fused_ordering(343) 00:10:18.422 fused_ordering(344) 00:10:18.422 fused_ordering(345) 00:10:18.422 fused_ordering(346) 00:10:18.422 fused_ordering(347) 00:10:18.422 fused_ordering(348) 00:10:18.422 fused_ordering(349) 00:10:18.422 fused_ordering(350) 00:10:18.422 fused_ordering(351) 00:10:18.422 fused_ordering(352) 00:10:18.422 fused_ordering(353) 00:10:18.422 fused_ordering(354) 00:10:18.422 fused_ordering(355) 00:10:18.422 fused_ordering(356) 00:10:18.422 fused_ordering(357) 00:10:18.422 fused_ordering(358) 00:10:18.422 fused_ordering(359) 00:10:18.422 fused_ordering(360) 00:10:18.422 fused_ordering(361) 00:10:18.422 fused_ordering(362) 00:10:18.422 fused_ordering(363) 00:10:18.422 fused_ordering(364) 00:10:18.422 fused_ordering(365) 00:10:18.422 fused_ordering(366) 00:10:18.422 fused_ordering(367) 00:10:18.422 fused_ordering(368) 00:10:18.422 fused_ordering(369) 00:10:18.422 fused_ordering(370) 00:10:18.422 fused_ordering(371) 00:10:18.422 fused_ordering(372) 00:10:18.422 fused_ordering(373) 00:10:18.422 fused_ordering(374) 00:10:18.422 fused_ordering(375) 00:10:18.422 fused_ordering(376) 00:10:18.422 fused_ordering(377) 00:10:18.422 fused_ordering(378) 00:10:18.422 fused_ordering(379) 00:10:18.422 fused_ordering(380) 00:10:18.422 fused_ordering(381) 00:10:18.422 fused_ordering(382) 00:10:18.422 fused_ordering(383) 00:10:18.422 fused_ordering(384) 00:10:18.422 fused_ordering(385) 00:10:18.422 fused_ordering(386) 00:10:18.422 fused_ordering(387) 00:10:18.422 fused_ordering(388) 00:10:18.422 fused_ordering(389) 00:10:18.422 fused_ordering(390) 
00:10:18.422 fused_ordering(391) 00:10:18.422 fused_ordering(392) 00:10:18.422 fused_ordering(393) 00:10:18.422 fused_ordering(394) 00:10:18.422 fused_ordering(395) 00:10:18.422 fused_ordering(396) 00:10:18.422 fused_ordering(397) 00:10:18.422 fused_ordering(398) 00:10:18.422 fused_ordering(399) 00:10:18.422 fused_ordering(400) 00:10:18.422 fused_ordering(401) 00:10:18.422 fused_ordering(402) 00:10:18.422 fused_ordering(403) 00:10:18.422 fused_ordering(404) 00:10:18.422 fused_ordering(405) 00:10:18.422 fused_ordering(406) 00:10:18.422 fused_ordering(407) 00:10:18.422 fused_ordering(408) 00:10:18.423 fused_ordering(409) 00:10:18.423 fused_ordering(410) 00:10:18.682 fused_ordering(411) 00:10:18.682 fused_ordering(412) 00:10:18.682 fused_ordering(413) 00:10:18.682 fused_ordering(414) 00:10:18.682 fused_ordering(415) 00:10:18.682 fused_ordering(416) 00:10:18.682 fused_ordering(417) 00:10:18.682 fused_ordering(418) 00:10:18.682 fused_ordering(419) 00:10:18.682 fused_ordering(420) 00:10:18.682 fused_ordering(421) 00:10:18.682 fused_ordering(422) 00:10:18.682 fused_ordering(423) 00:10:18.682 fused_ordering(424) 00:10:18.682 fused_ordering(425) 00:10:18.682 fused_ordering(426) 00:10:18.682 fused_ordering(427) 00:10:18.682 fused_ordering(428) 00:10:18.682 fused_ordering(429) 00:10:18.682 fused_ordering(430) 00:10:18.682 fused_ordering(431) 00:10:18.682 fused_ordering(432) 00:10:18.682 fused_ordering(433) 00:10:18.682 fused_ordering(434) 00:10:18.682 fused_ordering(435) 00:10:18.682 fused_ordering(436) 00:10:18.682 fused_ordering(437) 00:10:18.682 fused_ordering(438) 00:10:18.682 fused_ordering(439) 00:10:18.682 fused_ordering(440) 00:10:18.682 fused_ordering(441) 00:10:18.682 fused_ordering(442) 00:10:18.682 fused_ordering(443) 00:10:18.682 fused_ordering(444) 00:10:18.682 fused_ordering(445) 00:10:18.682 fused_ordering(446) 00:10:18.682 fused_ordering(447) 00:10:18.682 fused_ordering(448) 00:10:18.682 fused_ordering(449) 00:10:18.682 fused_ordering(450) 00:10:18.682 
fused_ordering(451) 00:10:18.682 fused_ordering(452) 00:10:18.682 fused_ordering(453) 00:10:18.682 fused_ordering(454) 00:10:18.682 fused_ordering(455) 00:10:18.682 fused_ordering(456) 00:10:18.682 fused_ordering(457) 00:10:18.682 fused_ordering(458) 00:10:18.682 fused_ordering(459) 00:10:18.682 fused_ordering(460) 00:10:18.682 fused_ordering(461) 00:10:18.682 fused_ordering(462) 00:10:18.682 fused_ordering(463) 00:10:18.682 fused_ordering(464) 00:10:18.682 fused_ordering(465) 00:10:18.682 fused_ordering(466) 00:10:18.682 fused_ordering(467) 00:10:18.682 fused_ordering(468) 00:10:18.682 fused_ordering(469) 00:10:18.682 fused_ordering(470) 00:10:18.682 fused_ordering(471) 00:10:18.682 fused_ordering(472) 00:10:18.682 fused_ordering(473) 00:10:18.682 fused_ordering(474) 00:10:18.682 fused_ordering(475) 00:10:18.682 fused_ordering(476) 00:10:18.682 fused_ordering(477) 00:10:18.682 fused_ordering(478) 00:10:18.682 fused_ordering(479) 00:10:18.682 fused_ordering(480) 00:10:18.682 fused_ordering(481) 00:10:18.682 fused_ordering(482) 00:10:18.682 fused_ordering(483) 00:10:18.682 fused_ordering(484) 00:10:18.682 fused_ordering(485) 00:10:18.682 fused_ordering(486) 00:10:18.682 fused_ordering(487) 00:10:18.682 fused_ordering(488) 00:10:18.682 fused_ordering(489) 00:10:18.682 fused_ordering(490) 00:10:18.682 fused_ordering(491) 00:10:18.682 fused_ordering(492) 00:10:18.682 fused_ordering(493) 00:10:18.682 fused_ordering(494) 00:10:18.682 fused_ordering(495) 00:10:18.682 fused_ordering(496) 00:10:18.682 fused_ordering(497) 00:10:18.682 fused_ordering(498) 00:10:18.682 fused_ordering(499) 00:10:18.682 fused_ordering(500) 00:10:18.682 fused_ordering(501) 00:10:18.682 fused_ordering(502) 00:10:18.682 fused_ordering(503) 00:10:18.682 fused_ordering(504) 00:10:18.682 fused_ordering(505) 00:10:18.682 fused_ordering(506) 00:10:18.682 fused_ordering(507) 00:10:18.682 fused_ordering(508) 00:10:18.682 fused_ordering(509) 00:10:18.682 fused_ordering(510) 00:10:18.682 fused_ordering(511) 
00:10:18.682 fused_ordering(512) 00:10:18.682 fused_ordering(513) 00:10:18.682 fused_ordering(514) 00:10:18.682 fused_ordering(515) 00:10:18.682 fused_ordering(516) 00:10:18.682 fused_ordering(517) 00:10:18.682 fused_ordering(518) 00:10:18.682 fused_ordering(519) 00:10:18.682 fused_ordering(520) 00:10:18.682 fused_ordering(521) 00:10:18.682 fused_ordering(522) 00:10:18.682 fused_ordering(523) 00:10:18.682 fused_ordering(524) 00:10:18.682 fused_ordering(525) 00:10:18.682 fused_ordering(526) 00:10:18.682 fused_ordering(527) 00:10:18.682 fused_ordering(528) 00:10:18.682 fused_ordering(529) 00:10:18.682 fused_ordering(530) 00:10:18.682 fused_ordering(531) 00:10:18.682 fused_ordering(532) 00:10:18.682 fused_ordering(533) 00:10:18.682 fused_ordering(534) 00:10:18.682 fused_ordering(535) 00:10:18.682 fused_ordering(536) 00:10:18.682 fused_ordering(537) 00:10:18.682 fused_ordering(538) 00:10:18.682 fused_ordering(539) 00:10:18.682 fused_ordering(540) 00:10:18.682 fused_ordering(541) 00:10:18.682 fused_ordering(542) 00:10:18.682 fused_ordering(543) 00:10:18.682 fused_ordering(544) 00:10:18.682 fused_ordering(545) 00:10:18.682 fused_ordering(546) 00:10:18.682 fused_ordering(547) 00:10:18.682 fused_ordering(548) 00:10:18.682 fused_ordering(549) 00:10:18.682 fused_ordering(550) 00:10:18.682 fused_ordering(551) 00:10:18.682 fused_ordering(552) 00:10:18.682 fused_ordering(553) 00:10:18.682 fused_ordering(554) 00:10:18.682 fused_ordering(555) 00:10:18.682 fused_ordering(556) 00:10:18.682 fused_ordering(557) 00:10:18.682 fused_ordering(558) 00:10:18.682 fused_ordering(559) 00:10:18.682 fused_ordering(560) 00:10:18.682 fused_ordering(561) 00:10:18.682 fused_ordering(562) 00:10:18.682 fused_ordering(563) 00:10:18.682 fused_ordering(564) 00:10:18.682 fused_ordering(565) 00:10:18.682 fused_ordering(566) 00:10:18.682 fused_ordering(567) 00:10:18.682 fused_ordering(568) 00:10:18.682 fused_ordering(569) 00:10:18.682 fused_ordering(570) 00:10:18.682 fused_ordering(571) 00:10:18.682 
fused_ordering(572) 00:10:18.682 fused_ordering(573) 00:10:18.682 fused_ordering(574) 00:10:18.682 fused_ordering(575) 00:10:18.682 fused_ordering(576) 00:10:18.682 fused_ordering(577) 00:10:18.682 fused_ordering(578) 00:10:18.682 fused_ordering(579) 00:10:18.682 fused_ordering(580) 00:10:18.682 fused_ordering(581) 00:10:18.682 fused_ordering(582) 00:10:18.682 fused_ordering(583) 00:10:18.682 fused_ordering(584) 00:10:18.682 fused_ordering(585) 00:10:18.682 fused_ordering(586) 00:10:18.682 fused_ordering(587) 00:10:18.682 fused_ordering(588) 00:10:18.682 fused_ordering(589) 00:10:18.682 fused_ordering(590) 00:10:18.682 fused_ordering(591) 00:10:18.682 fused_ordering(592) 00:10:18.682 fused_ordering(593) 00:10:18.682 fused_ordering(594) 00:10:18.682 fused_ordering(595) 00:10:18.682 fused_ordering(596) 00:10:18.682 fused_ordering(597) 00:10:18.682 fused_ordering(598) 00:10:18.682 fused_ordering(599) 00:10:18.682 fused_ordering(600) 00:10:18.682 fused_ordering(601) 00:10:18.682 fused_ordering(602) 00:10:18.682 fused_ordering(603) 00:10:18.682 fused_ordering(604) 00:10:18.682 fused_ordering(605) 00:10:18.682 fused_ordering(606) 00:10:18.682 fused_ordering(607) 00:10:18.682 fused_ordering(608) 00:10:18.682 fused_ordering(609) 00:10:18.682 fused_ordering(610) 00:10:18.682 fused_ordering(611) 00:10:18.682 fused_ordering(612) 00:10:18.682 fused_ordering(613) 00:10:18.682 fused_ordering(614) 00:10:18.682 fused_ordering(615) 00:10:19.249 fused_ordering(616) 00:10:19.249 fused_ordering(617) 00:10:19.249 fused_ordering(618) 00:10:19.249 fused_ordering(619) 00:10:19.249 fused_ordering(620) 00:10:19.249 fused_ordering(621) 00:10:19.249 fused_ordering(622) 00:10:19.249 fused_ordering(623) 00:10:19.249 fused_ordering(624) 00:10:19.249 fused_ordering(625) 00:10:19.249 fused_ordering(626) 00:10:19.249 fused_ordering(627) 00:10:19.249 fused_ordering(628) 00:10:19.249 fused_ordering(629) 00:10:19.249 fused_ordering(630) 00:10:19.249 fused_ordering(631) 00:10:19.249 fused_ordering(632) 
00:10:19.249 fused_ordering(633) 00:10:19.249 fused_ordering(634) 00:10:19.249 fused_ordering(635) 00:10:19.249 fused_ordering(636) 00:10:19.249 fused_ordering(637) 00:10:19.249 fused_ordering(638) 00:10:19.249 fused_ordering(639) 00:10:19.249 fused_ordering(640) 00:10:19.249 fused_ordering(641) 00:10:19.249 fused_ordering(642) 00:10:19.249 fused_ordering(643) 00:10:19.249 fused_ordering(644) 00:10:19.249 fused_ordering(645) 00:10:19.249 fused_ordering(646) 00:10:19.249 fused_ordering(647) 00:10:19.249 fused_ordering(648) 00:10:19.249 fused_ordering(649) 00:10:19.249 fused_ordering(650) 00:10:19.249 fused_ordering(651) 00:10:19.249 fused_ordering(652) 00:10:19.249 fused_ordering(653) 00:10:19.249 fused_ordering(654) 00:10:19.249 fused_ordering(655) 00:10:19.249 fused_ordering(656) 00:10:19.249 fused_ordering(657) 00:10:19.249 fused_ordering(658) 00:10:19.249 fused_ordering(659) 00:10:19.249 fused_ordering(660) 00:10:19.249 fused_ordering(661) 00:10:19.249 fused_ordering(662) 00:10:19.249 fused_ordering(663) 00:10:19.249 fused_ordering(664) 00:10:19.249 fused_ordering(665) 00:10:19.249 fused_ordering(666) 00:10:19.249 fused_ordering(667) 00:10:19.249 fused_ordering(668) 00:10:19.249 fused_ordering(669) 00:10:19.249 fused_ordering(670) 00:10:19.249 fused_ordering(671) 00:10:19.249 fused_ordering(672) 00:10:19.249 fused_ordering(673) 00:10:19.249 fused_ordering(674) 00:10:19.249 fused_ordering(675) 00:10:19.249 fused_ordering(676) 00:10:19.249 fused_ordering(677) 00:10:19.249 fused_ordering(678) 00:10:19.249 fused_ordering(679) 00:10:19.249 fused_ordering(680) 00:10:19.249 fused_ordering(681) 00:10:19.249 fused_ordering(682) 00:10:19.249 fused_ordering(683) 00:10:19.249 fused_ordering(684) 00:10:19.249 fused_ordering(685) 00:10:19.249 fused_ordering(686) 00:10:19.249 fused_ordering(687) 00:10:19.249 fused_ordering(688) 00:10:19.249 fused_ordering(689) 00:10:19.249 fused_ordering(690) 00:10:19.249 fused_ordering(691) 00:10:19.249 fused_ordering(692) 00:10:19.249 
fused_ordering(693) 00:10:19.249 fused_ordering(694) 00:10:19.249 fused_ordering(695) 00:10:19.249 fused_ordering(696) 00:10:19.249 fused_ordering(697) 00:10:19.249 fused_ordering(698) 00:10:19.249 fused_ordering(699) 00:10:19.249 fused_ordering(700) 00:10:19.249 fused_ordering(701) 00:10:19.249 fused_ordering(702) 00:10:19.249 fused_ordering(703) 00:10:19.249 fused_ordering(704) 00:10:19.249 fused_ordering(705) 00:10:19.249 fused_ordering(706) 00:10:19.249 fused_ordering(707) 00:10:19.249 fused_ordering(708) 00:10:19.249 fused_ordering(709) 00:10:19.249 fused_ordering(710) 00:10:19.249 fused_ordering(711) 00:10:19.249 fused_ordering(712) 00:10:19.249 fused_ordering(713) 00:10:19.249 fused_ordering(714) 00:10:19.249 fused_ordering(715) 00:10:19.249 fused_ordering(716) 00:10:19.249 fused_ordering(717) 00:10:19.249 fused_ordering(718) 00:10:19.249 fused_ordering(719) 00:10:19.249 fused_ordering(720) 00:10:19.249 fused_ordering(721) 00:10:19.249 fused_ordering(722) 00:10:19.249 fused_ordering(723) 00:10:19.249 fused_ordering(724) 00:10:19.249 fused_ordering(725) 00:10:19.249 fused_ordering(726) 00:10:19.249 fused_ordering(727) 00:10:19.249 fused_ordering(728) 00:10:19.249 fused_ordering(729) 00:10:19.249 fused_ordering(730) 00:10:19.249 fused_ordering(731) 00:10:19.249 fused_ordering(732) 00:10:19.249 fused_ordering(733) 00:10:19.249 fused_ordering(734) 00:10:19.249 fused_ordering(735) 00:10:19.249 fused_ordering(736) 00:10:19.249 fused_ordering(737) 00:10:19.249 fused_ordering(738) 00:10:19.249 fused_ordering(739) 00:10:19.249 fused_ordering(740) 00:10:19.249 fused_ordering(741) 00:10:19.249 fused_ordering(742) 00:10:19.249 fused_ordering(743) 00:10:19.249 fused_ordering(744) 00:10:19.249 fused_ordering(745) 00:10:19.249 fused_ordering(746) 00:10:19.249 fused_ordering(747) 00:10:19.249 fused_ordering(748) 00:10:19.249 fused_ordering(749) 00:10:19.249 fused_ordering(750) 00:10:19.249 fused_ordering(751) 00:10:19.249 fused_ordering(752) 00:10:19.249 fused_ordering(753) 
00:10:19.249 fused_ordering(754) 00:10:19.249 fused_ordering(755) 00:10:19.249 fused_ordering(756) 00:10:19.249 fused_ordering(757) 00:10:19.249 fused_ordering(758) 00:10:19.249 fused_ordering(759) 00:10:19.249 fused_ordering(760) 00:10:19.249 fused_ordering(761) 00:10:19.249 fused_ordering(762) 00:10:19.249 fused_ordering(763) 00:10:19.249 fused_ordering(764) 00:10:19.249 fused_ordering(765) 00:10:19.249 fused_ordering(766) 00:10:19.249 fused_ordering(767) 00:10:19.249 fused_ordering(768) 00:10:19.249 fused_ordering(769) 00:10:19.249 fused_ordering(770) 00:10:19.249 fused_ordering(771) 00:10:19.249 fused_ordering(772) 00:10:19.249 fused_ordering(773) 00:10:19.249 fused_ordering(774) 00:10:19.249 fused_ordering(775) 00:10:19.249 fused_ordering(776) 00:10:19.249 fused_ordering(777) 00:10:19.249 fused_ordering(778) 00:10:19.249 fused_ordering(779) 00:10:19.249 fused_ordering(780) 00:10:19.249 fused_ordering(781) 00:10:19.249 fused_ordering(782) 00:10:19.249 fused_ordering(783) 00:10:19.249 fused_ordering(784) 00:10:19.249 fused_ordering(785) 00:10:19.249 fused_ordering(786) 00:10:19.249 fused_ordering(787) 00:10:19.249 fused_ordering(788) 00:10:19.249 fused_ordering(789) 00:10:19.249 fused_ordering(790) 00:10:19.249 fused_ordering(791) 00:10:19.249 fused_ordering(792) 00:10:19.249 fused_ordering(793) 00:10:19.249 fused_ordering(794) 00:10:19.249 fused_ordering(795) 00:10:19.249 fused_ordering(796) 00:10:19.249 fused_ordering(797) 00:10:19.249 fused_ordering(798) 00:10:19.249 fused_ordering(799) 00:10:19.249 fused_ordering(800) 00:10:19.249 fused_ordering(801) 00:10:19.249 fused_ordering(802) 00:10:19.249 fused_ordering(803) 00:10:19.249 fused_ordering(804) 00:10:19.249 fused_ordering(805) 00:10:19.249 fused_ordering(806) 00:10:19.249 fused_ordering(807) 00:10:19.249 fused_ordering(808) 00:10:19.249 fused_ordering(809) 00:10:19.249 fused_ordering(810) 00:10:19.249 fused_ordering(811) 00:10:19.249 fused_ordering(812) 00:10:19.249 fused_ordering(813) 00:10:19.249 
fused_ordering(814) 00:10:19.249 fused_ordering(815) 00:10:19.249 [fused_ordering(816) through fused_ordering(994) elided: one identical record per iteration, timestamps advancing from 00:10:19.249 to 00:10:19.508] 00:10:19.509 fused_ordering(995)
00:10:19.509 fused_ordering(996) 00:10:19.509 fused_ordering(997) 00:10:19.509 fused_ordering(998) 00:10:19.509 fused_ordering(999) 00:10:19.509 fused_ordering(1000) 00:10:19.509 fused_ordering(1001) 00:10:19.509 fused_ordering(1002) 00:10:19.509 fused_ordering(1003) 00:10:19.509 fused_ordering(1004) 00:10:19.509 fused_ordering(1005) 00:10:19.509 fused_ordering(1006) 00:10:19.509 fused_ordering(1007) 00:10:19.509 fused_ordering(1008) 00:10:19.509 fused_ordering(1009) 00:10:19.509 fused_ordering(1010) 00:10:19.509 fused_ordering(1011) 00:10:19.509 fused_ordering(1012) 00:10:19.509 fused_ordering(1013) 00:10:19.509 fused_ordering(1014) 00:10:19.509 fused_ordering(1015) 00:10:19.509 fused_ordering(1016) 00:10:19.509 fused_ordering(1017) 00:10:19.509 fused_ordering(1018) 00:10:19.509 fused_ordering(1019) 00:10:19.509 fused_ordering(1020) 00:10:19.509 fused_ordering(1021) 00:10:19.509 fused_ordering(1022) 00:10:19.509 fused_ordering(1023) 00:10:19.509 17:18:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:19.509 17:18:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:19.509 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:19.509 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:19.509 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:19.509 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:19.509 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:19.509 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:19.509 rmmod nvme_tcp 00:10:19.509 rmmod nvme_fabrics 00:10:19.794 rmmod nvme_keyring 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 
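The teardown traced above tolerates failure on purpose: `set +e` before the `modprobe -v -r` loop, then `killprocess` checks the pid is alive with `kill -0`, inspects its command name via `ps --no-headers -o comm=`, and kills and reaps it. A minimal reconstruction of that killprocess pattern (a sketch of the helper seen in autotest_common.sh, not SPDK's verbatim source):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from the trace: verify the pid is
# alive (kill -0), read its command name the way the log does
# (ps --no-headers -o comm=), then kill and reap it.
killprocess() {
  local pid=$1 process_name
  kill -0 "$pid" 2>/dev/null || return 0          # already gone
  process_name=$(ps --no-headers -o comm= "$pid")
  echo "killing process with pid $pid ($process_name)"
  kill "$pid"
  wait "$pid" 2>/dev/null                          # reap; ignore 143
  return 0
}
```

Usage mirrors the trace: start the daemon, record `$!`, and call `killprocess "$pid"` from the EXIT trap.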
00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3972429 ']' 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3972429 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3972429 ']' 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3972429 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3972429 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3972429' 00:10:19.794 killing process with pid 3972429 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3972429 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3972429 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:19.794 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:19.795 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:19.795 17:18:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:19.795 17:18:38 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.795 17:18:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.795 17:18:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.337 17:18:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:22.337 00:10:22.337 real 0m10.318s 00:10:22.337 user 0m5.439s 00:10:22.337 sys 0m5.196s 00:10:22.337 17:18:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.337 17:18:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:22.337 ************************************ 00:10:22.337 END TEST nvmf_fused_ordering 00:10:22.337 ************************************ 00:10:22.337 17:18:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:22.337 17:18:40 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:22.337 17:18:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:22.337 17:18:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.337 17:18:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:22.337 ************************************ 00:10:22.337 START TEST nvmf_delete_subsystem 00:10:22.337 ************************************ 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:22.337 * Looking for test storage... 
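The `END TEST nvmf_fused_ordering` banner and the `real/user/sys` timing above, plus the `run_test nvmf_delete_subsystem ...` invocation, come from autotest's `run_test` wrapper. A minimal stand-in showing its shape (banner, run, banner, propagate the exit code; the timing arithmetic here is illustrative, the real helper uses `time`):

```shell
# Stand-in for the run_test wrapper seen in the trace: banner the test,
# run it, report elapsed seconds, and pass the exit code through.
run_test() {
  local name=$1; shift
  local start=$SECONDS rc=0
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@" || rc=$?
  echo "************************************"
  echo "END TEST $name ($((SECONDS - start))s, rc=$rc)"
  echo "************************************"
  return $rc
}
```

A failing test body makes `run_test` itself fail, which is what lets the surrounding pipeline trap and report it.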
00:10:22.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.337 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:22.338 17:18:40 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:22.338 17:18:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.617 17:18:45 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:27.617 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.617 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:27.618 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:27.618 Found net devices under 0000:86:00.0: cvl_0_0 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:27.618 Found net devices under 0000:86:00.1: cvl_0_1 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.618 
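`gather_supported_nvmf_pci_devs` above buckets NICs by vendor:device ID (Intel `0x8086`, Mellanox `0x15b3`) into the `e810`, `x722`, and `mlx` arrays, which is why both `0x159b` ports land in `e810` and bind the `ice` driver. A reduced sketch of that matching, covering only IDs visible in this trace's common.sh excerpt:

```shell
# Classify a vendor:device pair the way nvmf/common.sh buckets NICs.
# Only the IDs present in the traced common.sh excerpt are listed.
classify_nic() {
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810 (ice driver)
    0x8086:0x37d2)               echo x722 ;;   # Intel X722
    0x15b3:*)                    echo mlx  ;;   # Mellanox ConnectX family
    *)                           echo unknown ;;
  esac
}
```

The devices found in this run, `0000:86:00.0` and `0000:86:00.1` at `0x8086 - 0x159b`, both classify as `e810`.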
17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:27.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:27.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:10:27.618 00:10:27.618 --- 10.0.0.2 ping statistics --- 00:10:27.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.618 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:27.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:10:27.618 00:10:27.618 --- 10.0.0.1 ping statistics --- 00:10:27.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.618 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:27.618 17:18:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:27.618 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:27.618 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:27.618 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:27.618 
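The harness validates the namespace plumbing with one `ping -c 1` in each direction before declaring the fixture ready. A hedged sketch of checking that summary output for zero loss and pulling the average rtt (field positions taken from the iputils output shown above; the function names are illustrative):

```shell
# Check an iputils ping summary for 0% packet loss and extract avg rtt.
ping_ok() {
  # Leading space guards against matching "10% packet loss".
  grep -q ' 0% packet loss' <<<"$1"
}
ping_avg_rtt() {
  # Summary line: rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms
  # Splitting on '/' and ' ' puts avg in field 8.
  awk -F'[/ ]' '/^rtt/ {print $8}' <<<"$1"
}
```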
17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.618 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3976362 00:10:27.618 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3976362 00:10:27.618 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:27.618 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3976362 ']' 00:10:27.618 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.618 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:27.618 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.618 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:27.618 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.618 [2024-07-12 17:18:46.074047] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:10:27.618 [2024-07-12 17:18:46.074092] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.618 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.618 [2024-07-12 17:18:46.132298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:27.618 [2024-07-12 17:18:46.212884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.618 [2024-07-12 17:18:46.212919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.618 [2024-07-12 17:18:46.212926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.618 [2024-07-12 17:18:46.212931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.618 [2024-07-12 17:18:46.212937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
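`waitforlisten 3976362` above blocks until `nvmf_tgt` is up and accepting RPCs on `/var/tmp/spdk.sock`. The generic shape of that wait is a bounded poll loop; here is a sketch with an arbitrary predicate (the name `wait_for`, the retry count, and the 0.1 s interval are assumptions, not SPDK's helper):

```shell
# Poll a predicate command until it succeeds or retries run out --
# the same shape as waitforlisten's wait for /var/tmp/spdk.sock.
wait_for() {
  local max_retries=${2:-100} i
  for ((i = 0; i < max_retries; i++)); do
    if eval "$1"; then return 0; fi
    sleep 0.1
  done
  return 1
}
```

In the real helper the predicate is an RPC liveness probe against the freshly started pid rather than a file check.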
00:10:27.618 [2024-07-12 17:18:46.212999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.618 [2024-07-12 17:18:46.213001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:28.186 [2024-07-12 17:18:46.933071] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:28.186 [2024-07-12 17:18:46.949193] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:28.186 NULL1 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.186 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:28.444 Delay0 00:10:28.444 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.444 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.444 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.444 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:28.444 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
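The `rpc_cmd` calls in this stretch provision the target end to end: TCP transport, subsystem `cnode1`, a 10.0.0.2:4420 listener, a null bdev, a delay bdev wrapping it, and the namespace attach. Replaying that sequence against a stubbed `rpc_cmd` (a dry-run echo, since no `nvmf_tgt` is running here; every command line is taken from the trace, the stub is the only invention):

```shell
# Dry-run stub -- a real run dispatches through scripts/rpc.py to nvmf_tgt.
rpc_cmd() { echo "rpc.py $*"; }

# The provisioning sequence from delete_subsystem.sh as traced above.
provision_target() {
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}
```

The Delay0 bdev (1,000,000 µs latencies on every path) is what keeps I/O in flight long enough for the later `nvmf_delete_subsystem` to abort it.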
00:10:28.444 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3976458 00:10:28.444 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:28.444 17:18:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:28.444 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.444 [2024-07-12 17:18:47.023737] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:30.346 17:18:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:30.346 17:18:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.346 17:18:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 starting I/O failed: -6 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 starting I/O failed: -6 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Write completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 starting I/O failed: -6 00:10:30.346 Write completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, 
sc=8) 00:10:30.346 starting I/O failed: -6 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Write completed with error (sct=0, sc=8) 00:10:30.346 starting I/O failed: -6 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Write completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 starting I/O failed: -6 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 starting I/O failed: -6 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 starting I/O failed: -6 00:10:30.346 Write completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 starting I/O failed: -6 00:10:30.346 Write completed with error (sct=0, sc=8) 00:10:30.346 Write completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 starting I/O failed: -6 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 starting I/O failed: -6 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.346 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, 
sc=8) 00:10:30.347 starting I/O failed: -6 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 starting I/O failed: -6 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 [2024-07-12 17:18:49.103030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ee5c0 is same with the state(5) to be set 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 starting I/O failed: -6 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 starting I/O failed: -6 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 starting I/O failed: -6 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 starting I/O failed: -6 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 starting I/O failed: -6 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 starting I/O failed: -6 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with 
error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 starting I/O failed: -6 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 starting I/O failed: -6 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 starting I/O failed: -6 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 starting I/O failed: -6 00:10:30.347 [2024-07-12 17:18:49.103829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f401800cfe0 is same with the state(5) to be set 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 
00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read 
completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Write completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, 
sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 Read completed with error (sct=0, sc=8) 00:10:30.347 [2024-07-12 17:18:49.104300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4018000c00 is same with the state(5) to be set 00:10:31.723 [2024-07-12 17:18:50.077642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11efac0 is same with the state(5) to be set 00:10:31.723 Write completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Write completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Write completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Write completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Write completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Write completed with error (sct=0, sc=8) 00:10:31.723 Write completed with error (sct=0, sc=8) 00:10:31.723 Write completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Write completed with error (sct=0, sc=8) 00:10:31.723 Write completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Write completed with error (sct=0, sc=8) 00:10:31.723 Write completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, 
sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Read completed with error (sct=0, sc=8) 00:10:31.723 Write completed with error (sct=0, sc=8) 00:10:31.724 [2024-07-12 17:18:50.106485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ee000 is same with the state(5) to be set 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 [2024-07-12 17:18:50.106585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f401800d600 is same with the state(5) to be set 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, 
sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 [2024-07-12 17:18:50.107090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ee3e0 is same with the state(5) to be set 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 
Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 Write completed with error (sct=0, sc=8) 00:10:31.724 Read completed with error (sct=0, sc=8) 00:10:31.724 [2024-07-12 17:18:50.107338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ee7a0 is same with the state(5) to be set 00:10:31.724 Initializing NVMe Controllers 00:10:31.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:31.724 Controller IO queue size 128, less than required. 00:10:31.724 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:31.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:31.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:31.724 Initialization complete. Launching workers. 
00:10:31.724 ======================================================== 00:10:31.724 Latency(us) 00:10:31.724 Device Information : IOPS MiB/s Average min max 00:10:31.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 182.32 0.09 955785.42 718.09 1011282.56 00:10:31.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.01 0.07 885502.01 250.11 1010387.79 00:10:31.724 ======================================================== 00:10:31.724 Total : 335.34 0.16 923715.36 250.11 1011282.56 00:10:31.724 00:10:31.724 [2024-07-12 17:18:50.108024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11efac0 (9): Bad file descriptor 00:10:31.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:31.724 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.724 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:31.724 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3976458 00:10:31.724 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3976458 00:10:31.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3976458) - No such process 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3976458 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3976458 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@636 -- # local arg=wait 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3976458 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:31.983 [2024-07-12 17:18:50.637627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3977138 00:10:31.983 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:31.984 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:31.984 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3977138 00:10:31.984 17:18:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:31.984 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.984 [2024-07-12 17:18:50.704830] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:10:32.552 17:18:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:32.552 17:18:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3977138 00:10:32.552 17:18:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:33.121 17:18:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:33.121 17:18:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3977138 00:10:33.121 17:18:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:33.690 17:18:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:33.690 17:18:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3977138 00:10:33.690 17:18:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:33.949 17:18:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:33.949 17:18:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3977138 00:10:33.949 17:18:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:34.526 17:18:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:34.526 17:18:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3977138 00:10:34.526 17:18:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:35.095 17:18:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:35.095 17:18:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3977138 00:10:35.095 17:18:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:35.095 Initializing NVMe Controllers 00:10:35.095 Attached to NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:35.095 Controller IO queue size 128, less than required. 00:10:35.095 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:35.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:35.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:35.095 Initialization complete. Launching workers. 00:10:35.095 ======================================================== 00:10:35.095 Latency(us) 00:10:35.095 Device Information : IOPS MiB/s Average min max 00:10:35.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003437.39 1000191.92 1041938.87 00:10:35.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005268.33 1000222.44 1042821.99 00:10:35.095 ======================================================== 00:10:35.095 Total : 256.00 0.12 1004352.86 1000191.92 1042821.99 00:10:35.095 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3977138 00:10:35.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3977138) - No such process 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3977138 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:35.684 rmmod nvme_tcp 00:10:35.684 rmmod nvme_fabrics 00:10:35.684 rmmod nvme_keyring 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3976362 ']' 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3976362 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3976362 ']' 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3976362 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3976362 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3976362' 00:10:35.684 killing process with pid 3976362 00:10:35.684 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3976362 00:10:35.684 17:18:54 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 3976362 00:10:35.942 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:35.942 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:35.942 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:35.942 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:35.942 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:35.942 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.942 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:35.942 17:18:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.842 17:18:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:37.842 00:10:37.842 real 0m15.867s 00:10:37.842 user 0m30.212s 00:10:37.842 sys 0m4.697s 00:10:37.842 17:18:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.842 17:18:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.842 ************************************ 00:10:37.842 END TEST nvmf_delete_subsystem 00:10:37.842 ************************************ 00:10:37.842 17:18:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:37.842 17:18:56 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:37.842 17:18:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:37.842 17:18:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.842 17:18:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:37.842 ************************************ 
00:10:37.842 START TEST nvmf_ns_masking 00:10:37.842 ************************************ 00:10:37.842 17:18:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:38.101 * Looking for test storage... 00:10:38.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.101 17:18:56 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.101 17:18:56 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=83e2752a-3243-48f4-9f2b-3e6f506fd661 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a97eb942-2977-45f3-8357-651df58a68b1 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c289a633-9932-4c8a-8919-aae714ee3191 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:38.102 17:18:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:43.394 
17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.394 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:43.395 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:43.395 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:43.395 Found net devices under 0000:86:00.0: cvl_0_0 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:43.395 Found net devices under 0000:86:00.1: cvl_0_1 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:43.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:10:43.395 00:10:43.395 --- 10.0.0.2 ping statistics --- 00:10:43.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.395 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:10:43.395 00:10:43.395 --- 10.0.0.1 ping statistics --- 00:10:43.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.395 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.395 17:19:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:43.395 17:19:01 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:43.395 17:19:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:10:43.395 17:19:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:43.395 17:19:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:43.395 17:19:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:43.395 17:19:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3981133 00:10:43.395 17:19:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3981133 00:10:43.395 17:19:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:10:43.395 17:19:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3981133 ']' 00:10:43.395 17:19:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.396 17:19:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.396 17:19:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.396 17:19:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.396 17:19:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:43.396 [2024-07-12 17:19:02.060963] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:10:43.396 [2024-07-12 17:19:02.061005] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.396 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.396 [2024-07-12 17:19:02.117218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.655 [2024-07-12 17:19:02.196495] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.655 [2024-07-12 17:19:02.196529] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.655 [2024-07-12 17:19:02.196536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.655 [2024-07-12 17:19:02.196542] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.655 [2024-07-12 17:19:02.196547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:43.655 [2024-07-12 17:19:02.196566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.237 17:19:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.237 17:19:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:44.237 17:19:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:44.237 17:19:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:44.237 17:19:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:44.237 17:19:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.237 17:19:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:44.500 [2024-07-12 17:19:03.043998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.500 17:19:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:10:44.500 17:19:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:10:44.500 17:19:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:44.500 Malloc1 00:10:44.500 17:19:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:44.761 Malloc2 00:10:44.761 17:19:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:45.049 17:19:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:45.049 17:19:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.310 [2024-07-12 17:19:03.950906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.310 17:19:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:10:45.310 17:19:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c289a633-9932-4c8a-8919-aae714ee3191 -a 10.0.0.2 -s 4420 -i 4 00:10:45.569 17:19:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:10:45.569 17:19:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:45.569 17:19:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:45.569 17:19:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:45.569 17:19:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:47.475 
17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:47.475 [ 0]:0x1 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8f522988f3d4447f9904d199a630a2c7 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8f522988f3d4447f9904d199a630a2c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:47.475 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:47.734 [ 0]:0x1 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8f522988f3d4447f9904d199a630a2c7 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8f522988f3d4447f9904d199a630a2c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:47.734 [ 1]:0x2 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d418f767c4d42e68c33b5003cf8f41f 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d418f767c4d42e68c33b5003cf8f41f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:10:47.734 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.993 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.993 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:48.252 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:10:48.252 17:19:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c289a633-9932-4c8a-8919-aae714ee3191 -a 10.0.0.2 -s 4420 -i 4 00:10:48.511 17:19:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:48.511 17:19:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:48.511 17:19:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.511 17:19:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:10:48.511 17:19:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:10:48.511 17:19:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:50.416 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:50.416 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:50.417 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:50.675 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:50.675 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.675 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:50.675 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:50.676 [ 0]:0x2 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d418f767c4d42e68c33b5003cf8f41f 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d418f767c4d42e68c33b5003cf8f41f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:50.676 [ 0]:0x1 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:50.676 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8f522988f3d4447f9904d199a630a2c7 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8f522988f3d4447f9904d199a630a2c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:10:50.935 [ 1]:0x2 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d418f767c4d42e68c33b5003cf8f41f 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d418f767c4d42e68c33b5003cf8f41f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:50.935 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:50.936 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:50.936 17:19:09 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:51.195 [ 0]:0x2 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d418f767c4d42e68c33b5003cf8f41f 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d418f767c4d42e68c33b5003cf8f41f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.195 17:19:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:51.454 17:19:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:10:51.454 17:19:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c289a633-9932-4c8a-8919-aae714ee3191 -a 10.0.0.2 -s 4420 -i 4 00:10:51.454 17:19:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:51.454 17:19:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:51.454 17:19:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.454 17:19:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:51.454 17:19:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:51.454 17:19:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:54.009 [ 0]:0x1 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8f522988f3d4447f9904d199a630a2c7 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8f522988f3d4447f9904d199a630a2c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:54.009 [ 1]:0x2 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d418f767c4d42e68c33b5003cf8f41f 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d418f767c4d42e68c33b5003cf8f41f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:54.009 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:54.010 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:54.010 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:54.010 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.010 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:54.010 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.010 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:54.010 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:54.010 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:54.010 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:54.010 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:54.268 17:19:12 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:54.268 [ 0]:0x2 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d418f767c4d42e68c33b5003cf8f41f 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d418f767c4d42e68c33b5003cf8f41f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:54.268 17:19:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:54.268 [2024-07-12 17:19:13.020786] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:54.268 request: 00:10:54.268 { 00:10:54.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.268 "nsid": 2, 00:10:54.268 "host": "nqn.2016-06.io.spdk:host1", 00:10:54.268 "method": "nvmf_ns_remove_host", 00:10:54.268 "req_id": 1 00:10:54.268 } 00:10:54.268 Got JSON-RPC error response 00:10:54.268 response: 00:10:54.268 { 00:10:54.268 "code": -32602, 00:10:54.268 "message": "Invalid parameters" 00:10:54.268 } 00:10:54.268 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:54.268 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:54.268 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:54.268 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:54.268 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:54.268 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:54.268 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 
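The `NOT` wrapper traced above (autotest_common.sh@648..675) inverts a command's exit status so that an expected failure — here, removing a host from an auto-visible namespace — passes the test. A sketch consistent with the traced steps (`es=1`, `(( es > 128 ))`, `(( !es == 0 ))`); the real helper also validates its argument with `type -t` first, which is omitted here:

```shell
#!/usr/bin/env bash

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # killed by a signal: a real harness failure
    (( !es == 0 ))                   # succeed only if the wrapped command failed
}

NOT false && echo "negative test passed"
```

Note the signal guard: exit codes above 128 are propagated rather than inverted, so a crashed command never masquerades as an expected failure.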
00:10:54.268 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:54.268 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.269 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:54.526 [ 0]:0x2 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6d418f767c4d42e68c33b5003cf8f41f 00:10:54.526 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6d418f767c4d42e68c33b5003cf8f41f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:54.527 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:54.527 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.527 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3983130 00:10:54.527 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.527 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3983130 /var/tmp/host.sock 00:10:54.527 17:19:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:54.527 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3983130 ']' 00:10:54.527 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:54.527 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.527 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:54.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
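The second `spdk_tgt` instance above is launched with `-r /var/tmp/host.sock`, and `waitforlisten` blocks until that UNIX socket appears ("Waiting for process to start up and listen on UNIX domain socket..."). A minimal stubbed sketch of that wait — the real `waitforlisten` additionally probes the RPC server and the pid, which is omitted here, and the socket path is a temp file rather than `/var/tmp/host.sock`:

```shell
#!/usr/bin/env bash

sock="$(mktemp -u)"           # stand-in for /var/tmp/host.sock
( sleep 0.3; : > "$sock" ) &  # stand-in for spdk_tgt creating its socket

waitforsock() {
    local path=$1 i=0
    while (( i++ < 40 )); do
        [ -e "$path" ] && return 0
        sleep 0.05
    done
    return 1                  # target never came up: fail instead of hanging
}

waitforsock "$sock" && echo "target is listening"
rm -f "$sock"
```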
00:10:54.527 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.527 17:19:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:54.527 [2024-07-12 17:19:13.247561] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:10:54.527 [2024-07-12 17:19:13.247606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3983130 ] 00:10:54.527 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.527 [2024-07-12 17:19:13.301307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.784 [2024-07-12 17:19:13.379281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.349 17:19:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.349 17:19:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:55.349 17:19:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.607 17:19:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:55.867 17:19:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 83e2752a-3243-48f4-9f2b-3e6f506fd661 00:10:55.867 17:19:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:55.867 17:19:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 83E2752A324348F49F2B3E6F506FD661 -i 00:10:55.867 17:19:14 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # uuid2nguid a97eb942-2977-45f3-8357-651df58a68b1 00:10:55.867 17:19:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:55.867 17:19:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A97EB942297745F38357651DF58A68B1 -i 00:10:56.126 17:19:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:56.384 17:19:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:56.384 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:56.384 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:56.642 nvme0n1 00:10:56.642 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:56.642 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:56.900 nvme1n2 00:10:56.900 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # 
hostrpc bdev_get_bdevs 00:10:56.900 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:10:56.900 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:56.900 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:10:56.900 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:57.159 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:57.159 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:57.159 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:57.159 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:57.418 17:19:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 83e2752a-3243-48f4-9f2b-3e6f506fd661 == \8\3\e\2\7\5\2\a\-\3\2\4\3\-\4\8\f\4\-\9\f\2\b\-\3\e\6\f\5\0\6\f\d\6\6\1 ]] 00:10:57.418 17:19:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:57.418 17:19:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:57.418 17:19:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:57.418 17:19:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a97eb942-2977-45f3-8357-651df58a68b1 == \a\9\7\e\b\9\4\2\-\2\9\7\7\-\4\5\f\3\-\8\3\5\7\-\6\5\1\d\f\5\8\a\6\8\b\1 ]] 00:10:57.418 17:19:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3983130 00:10:57.418 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # 
'[' -z 3983130 ']' 00:10:57.418 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3983130 00:10:57.418 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:57.418 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:57.418 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3983130 00:10:57.698 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:57.698 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:57.698 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3983130' 00:10:57.698 killing process with pid 3983130 00:10:57.698 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3983130 00:10:57.698 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3983130 00:10:57.957 17:19:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.957 17:19:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:10:57.957 17:19:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:10:57.957 17:19:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:57.957 17:19:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:57.957 17:19:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:57.957 17:19:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:57.957 17:19:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:57.957 17:19:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:57.957 rmmod nvme_tcp 00:10:57.957 rmmod 
nvme_fabrics 00:10:58.216 rmmod nvme_keyring 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3981133 ']' 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3981133 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3981133 ']' 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3981133 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3981133 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3981133' 00:10:58.216 killing process with pid 3981133 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3981133 00:10:58.216 17:19:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3981133 00:10:58.475 17:19:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:58.475 17:19:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:58.475 17:19:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:58.475 17:19:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
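Earlier in this test, Malloc1 and Malloc2 were re-added with explicit NGUIDs derived from UUIDs via `uuid2nguid` (nvmf/common.sh@759). Only the `tr -d -` stage is visible in the trace; the upper-casing step is inferred from the resulting `-g` arguments, so this is a sketch consistent with the traced input/output pairs rather than the verbatim helper:

```shell
#!/usr/bin/env bash

# An NGUID here is the UUID's 32 hex digits, upper-cased, dashes stripped.
uuid2nguid() {
    echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid 83e2752a-3243-48f4-9f2b-3e6f506fd661
```

This matches the trace, where `83e2752a-3243-48f4-9f2b-3e6f506fd661` becomes the `-g 83E2752A324348F49F2B3E6F506FD661` argument to `nvmf_subsystem_add_ns`.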
00:10:58.475 17:19:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:58.475 17:19:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.475 17:19:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.475 17:19:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.379 17:19:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:00.379 00:11:00.379 real 0m22.473s 00:11:00.379 user 0m24.202s 00:11:00.380 sys 0m6.030s 00:11:00.380 17:19:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.380 17:19:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:00.380 ************************************ 00:11:00.380 END TEST nvmf_ns_masking 00:11:00.380 ************************************ 00:11:00.380 17:19:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:00.380 17:19:19 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:00.380 17:19:19 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:00.380 17:19:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:00.380 17:19:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.380 17:19:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:00.380 ************************************ 00:11:00.380 START TEST nvmf_nvme_cli 00:11:00.380 ************************************ 00:11:00.380 17:19:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:00.638 * Looking for test storage... 
00:11:00.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.638 17:19:19 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.638 17:19:19 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:00.638 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:00.639 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:00.639 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.639 17:19:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:00.639 17:19:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.639 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:00.639 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:00.639 17:19:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:00.639 17:19:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:05.916 17:19:24 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.916 17:19:24 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:05.916 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:05.916 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.916 17:19:24 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:05.916 Found net devices under 0000:86:00.0: cvl_0_0 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:05.916 Found net devices under 0000:86:00.1: cvl_0_1 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:05.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:11:05.916 00:11:05.916 --- 10.0.0.2 ping statistics --- 00:11:05.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.916 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:11:05.916 00:11:05.916 --- 10.0.0.1 ping statistics --- 00:11:05.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.916 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3987156 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3987156 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3987156 ']' 
00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:05.916 17:19:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:05.916 [2024-07-12 17:19:24.640548] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:11:05.917 [2024-07-12 17:19:24.640590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.917 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.176 [2024-07-12 17:19:24.697464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.176 [2024-07-12 17:19:24.779221] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.176 [2024-07-12 17:19:24.779259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.176 [2024-07-12 17:19:24.779267] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.176 [2024-07-12 17:19:24.779273] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.176 [2024-07-12 17:19:24.779278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:06.176 [2024-07-12 17:19:24.779314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.176 [2024-07-12 17:19:24.779412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.176 [2024-07-12 17:19:24.779480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.176 [2024-07-12 17:19:24.779481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.747 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.747 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:06.747 17:19:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:06.747 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.747 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:06.747 17:19:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.747 17:19:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.747 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.747 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:06.747 [2024-07-12 17:19:25.499209] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.747 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.747 17:19:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:06.747 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.747 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:07.037 Malloc0 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.037 
17:19:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:07.037 Malloc1 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:07.037 [2024-07-12 17:19:25.580293] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:07.037 00:11:07.037 Discovery Log Number of Records 2, Generation counter 2 00:11:07.037 =====Discovery Log Entry 0====== 00:11:07.037 trtype: tcp 00:11:07.037 adrfam: ipv4 00:11:07.037 subtype: current discovery subsystem 00:11:07.037 treq: not required 00:11:07.037 portid: 0 00:11:07.037 trsvcid: 4420 00:11:07.037 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:07.037 traddr: 10.0.0.2 00:11:07.037 eflags: explicit discovery connections, duplicate discovery information 00:11:07.037 sectype: none 00:11:07.037 =====Discovery Log Entry 1====== 00:11:07.037 trtype: tcp 00:11:07.037 adrfam: ipv4 00:11:07.037 subtype: nvme subsystem 00:11:07.037 treq: not required 00:11:07.037 portid: 0 00:11:07.037 trsvcid: 4420 00:11:07.037 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:07.037 traddr: 10.0.0.2 00:11:07.037 eflags: none 00:11:07.037 sectype: none 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:07.037 17:19:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:08.413 17:19:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:08.413 17:19:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:08.413 17:19:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.413 17:19:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:08.413 17:19:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:08.413 17:19:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 
00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:10.317 /dev/nvme0n1 ]] 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:10.317 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:10.318 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:10.318 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:10.318 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:10.318 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:10.318 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:10.318 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:10.318 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:10.318 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:10.318 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:10.318 17:19:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:10.318 17:19:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:10.318 17:19:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.318 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:10.577 rmmod nvme_tcp 00:11:10.577 rmmod nvme_fabrics 00:11:10.577 rmmod nvme_keyring 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3987156 ']' 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3987156 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@948 -- # '[' -z 3987156 ']' 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3987156 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3987156 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3987156' 00:11:10.577 killing process with pid 3987156 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3987156 00:11:10.577 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3987156 00:11:10.836 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:10.836 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:10.836 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:10.836 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.836 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:10.836 17:19:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.836 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.836 17:19:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.742 17:19:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:12.742 00:11:12.742 real 0m12.341s 00:11:12.742 user 
0m20.025s 00:11:12.742 sys 0m4.553s 00:11:12.742 17:19:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.742 17:19:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:12.742 ************************************ 00:11:12.742 END TEST nvmf_nvme_cli 00:11:12.742 ************************************ 00:11:13.002 17:19:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:13.002 17:19:31 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:13.002 17:19:31 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:13.002 17:19:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:13.002 17:19:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.002 17:19:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:13.002 ************************************ 00:11:13.002 START TEST nvmf_vfio_user 00:11:13.002 ************************************ 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:13.002 * Looking for test storage... 
00:11:13.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:13.002 
17:19:31 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3988447 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3988447' 00:11:13.002 Process pid: 3988447 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3988447 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3988447 ']' 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.002 17:19:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.003 17:19:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.003 17:19:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:13.003 [2024-07-12 17:19:31.704236] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:11:13.003 [2024-07-12 17:19:31.704284] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.003 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.003 [2024-07-12 17:19:31.761414] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.262 [2024-07-12 17:19:31.846691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.262 [2024-07-12 17:19:31.846726] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.262 [2024-07-12 17:19:31.846733] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.262 [2024-07-12 17:19:31.846740] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.262 [2024-07-12 17:19:31.846745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:13.262 [2024-07-12 17:19:31.846786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.262 [2024-07-12 17:19:31.846804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.262 [2024-07-12 17:19:31.846824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.262 [2024-07-12 17:19:31.846825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.831 17:19:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.831 17:19:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:13.831 17:19:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:14.768 17:19:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:15.026 17:19:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:15.026 17:19:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:15.026 17:19:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:15.026 17:19:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:15.026 17:19:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:15.284 Malloc1 00:11:15.285 17:19:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:15.544 17:19:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:15.544 17:19:34 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:15.802 17:19:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:15.802 17:19:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:15.802 17:19:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:16.061 Malloc2 00:11:16.061 17:19:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:16.061 17:19:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:16.320 17:19:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:16.581 17:19:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:16.581 17:19:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:16.581 17:19:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:16.581 17:19:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:16.581 17:19:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:16.581 17:19:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:16.581 [2024-07-12 17:19:35.240018] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:11:16.581 [2024-07-12 17:19:35.240056] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3989149 ] 00:11:16.581 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.581 [2024-07-12 17:19:35.269905] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:16.581 [2024-07-12 17:19:35.277725] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:16.581 [2024-07-12 17:19:35.277743] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0b587d7000 00:11:16.581 [2024-07-12 17:19:35.278718] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:16.581 [2024-07-12 17:19:35.279717] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:16.581 [2024-07-12 17:19:35.280730] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:16.581 [2024-07-12 17:19:35.281742] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:16.581 [2024-07-12 17:19:35.282749] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 
5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:16.581 [2024-07-12 17:19:35.283753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:16.581 [2024-07-12 17:19:35.284755] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:16.581 [2024-07-12 17:19:35.285753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:16.581 [2024-07-12 17:19:35.286767] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:16.581 [2024-07-12 17:19:35.286776] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0b587cc000 00:11:16.581 [2024-07-12 17:19:35.287716] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:16.581 [2024-07-12 17:19:35.298322] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:16.581 [2024-07-12 17:19:35.298348] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:16.581 [2024-07-12 17:19:35.302863] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:16.581 [2024-07-12 17:19:35.302901] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:16.581 [2024-07-12 17:19:35.302973] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:16.581 [2024-07-12 17:19:35.302991] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:16.581 [2024-07-12 17:19:35.302996] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:16.581 [2024-07-12 17:19:35.303857] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:16.581 [2024-07-12 17:19:35.303866] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:16.581 [2024-07-12 17:19:35.303872] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:16.581 [2024-07-12 17:19:35.304869] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:16.581 [2024-07-12 17:19:35.304877] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:16.581 [2024-07-12 17:19:35.304884] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:16.581 [2024-07-12 17:19:35.305870] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:16.581 [2024-07-12 17:19:35.305878] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:16.581 [2024-07-12 17:19:35.306875] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:16.581 [2024-07-12 17:19:35.306883] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:16.581 [2024-07-12 17:19:35.306887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:16.581 [2024-07-12 17:19:35.306893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:16.581 [2024-07-12 17:19:35.306998] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:16.581 [2024-07-12 17:19:35.307002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:16.581 [2024-07-12 17:19:35.307007] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:16.581 [2024-07-12 17:19:35.307877] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:16.581 [2024-07-12 17:19:35.308881] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:16.581 [2024-07-12 17:19:35.309886] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:16.581 [2024-07-12 17:19:35.310887] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:16.581 [2024-07-12 17:19:35.310951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:16.581 [2024-07-12 17:19:35.311899] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:16.581 [2024-07-12 17:19:35.311907] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:16.581 [2024-07-12 17:19:35.311911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:16.581 [2024-07-12 17:19:35.311928] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:16.581 [2024-07-12 17:19:35.311935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:16.581 [2024-07-12 17:19:35.311950] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:16.581 [2024-07-12 17:19:35.311954] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:16.581 [2024-07-12 17:19:35.311967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:16.581 [2024-07-12 17:19:35.312007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:16.581 [2024-07-12 17:19:35.312016] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:16.581 [2024-07-12 17:19:35.312022] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:16.581 [2024-07-12 17:19:35.312026] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 
00:11:16.581 [2024-07-12 17:19:35.312031] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:16.581 [2024-07-12 17:19:35.312035] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:16.581 [2024-07-12 17:19:35.312039] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:16.581 [2024-07-12 17:19:35.312043] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:16.581 [2024-07-12 17:19:35.312049] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:16.581 [2024-07-12 17:19:35.312059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:16.581 [2024-07-12 17:19:35.312070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:16.581 [2024-07-12 17:19:35.312084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.581 [2024-07-12 17:19:35.312092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.581 [2024-07-12 17:19:35.312099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.581 [2024-07-12 17:19:35.312107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.581 [2024-07-12 17:19:35.312111] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:16.581 [2024-07-12 17:19:35.312118] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:16.581 [2024-07-12 17:19:35.312129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:16.581 [2024-07-12 17:19:35.312142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:16.582 [2024-07-12 17:19:35.312147] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:16.582 [2024-07-12 17:19:35.312152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312157] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312163] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:16.582 [2024-07-12 17:19:35.312185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:16.582 [2024-07-12 17:19:35.312233] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 
00:11:16.582 [2024-07-12 17:19:35.312240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312247] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:16.582 [2024-07-12 17:19:35.312251] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:16.582 [2024-07-12 17:19:35.312257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:16.582 [2024-07-12 17:19:35.312268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:16.582 [2024-07-12 17:19:35.312276] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:16.582 [2024-07-12 17:19:35.312284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312296] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:16.582 [2024-07-12 17:19:35.312300] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:16.582 [2024-07-12 17:19:35.312305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:16.582 [2024-07-12 17:19:35.312326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:16.582 
[2024-07-12 17:19:35.312338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312351] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:16.582 [2024-07-12 17:19:35.312355] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:16.582 [2024-07-12 17:19:35.312362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:16.582 [2024-07-12 17:19:35.312375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:16.582 [2024-07-12 17:19:35.312385] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312408] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 
30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312412] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312417] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:16.582 [2024-07-12 17:19:35.312421] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:16.582 [2024-07-12 17:19:35.312425] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:16.582 [2024-07-12 17:19:35.312442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:16.582 [2024-07-12 17:19:35.312451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:16.582 [2024-07-12 17:19:35.312461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:16.582 [2024-07-12 17:19:35.312469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:16.582 [2024-07-12 17:19:35.312479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:16.582 [2024-07-12 17:19:35.312490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:16.582 [2024-07-12 17:19:35.312506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:16.582 [2024-07-12 17:19:35.312515] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:16.582 [2024-07-12 17:19:35.312527] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:16.582 [2024-07-12 17:19:35.312531] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:16.582 [2024-07-12 17:19:35.312534] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:16.582 [2024-07-12 17:19:35.312537] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:16.582 [2024-07-12 17:19:35.312542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:16.582 [2024-07-12 17:19:35.312549] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:16.582 [2024-07-12 17:19:35.312552] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:16.582 [2024-07-12 17:19:35.312560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:16.582 [2024-07-12 17:19:35.312566] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:16.582 [2024-07-12 17:19:35.312570] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:16.582 [2024-07-12 17:19:35.312575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:16.582 [2024-07-12 17:19:35.312581] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:16.582 [2024-07-12 17:19:35.312585] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:16.582 [2024-07-12 17:19:35.312590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:16.582 [2024-07-12 17:19:35.312596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:16.582 [2024-07-12 17:19:35.312607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:16.582 [2024-07-12 17:19:35.312617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:16.582 [2024-07-12 17:19:35.312623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:16.582 ===================================================== 00:11:16.582 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:16.582 ===================================================== 00:11:16.582 Controller Capabilities/Features 00:11:16.582 ================================ 00:11:16.582 Vendor ID: 4e58 00:11:16.582 Subsystem Vendor ID: 4e58 00:11:16.582 Serial Number: SPDK1 00:11:16.582 Model Number: SPDK bdev Controller 00:11:16.582 Firmware Version: 24.09 00:11:16.582 Recommended Arb Burst: 6 00:11:16.582 IEEE OUI Identifier: 8d 6b 50 00:11:16.582 Multi-path I/O 00:11:16.582 May have multiple subsystem ports: Yes 00:11:16.582 May have multiple controllers: Yes 00:11:16.582 Associated with SR-IOV VF: No 00:11:16.582 Max Data Transfer Size: 131072 00:11:16.582 Max Number of Namespaces: 32 00:11:16.582 Max Number of I/O Queues: 127 00:11:16.582 NVMe Specification Version (VS): 1.3 00:11:16.582 NVMe Specification Version (Identify): 1.3 00:11:16.582 Maximum Queue Entries: 256 00:11:16.582 
Contiguous Queues Required: Yes 00:11:16.582 Arbitration Mechanisms Supported 00:11:16.582 Weighted Round Robin: Not Supported 00:11:16.582 Vendor Specific: Not Supported 00:11:16.582 Reset Timeout: 15000 ms 00:11:16.582 Doorbell Stride: 4 bytes 00:11:16.582 NVM Subsystem Reset: Not Supported 00:11:16.582 Command Sets Supported 00:11:16.582 NVM Command Set: Supported 00:11:16.582 Boot Partition: Not Supported 00:11:16.582 Memory Page Size Minimum: 4096 bytes 00:11:16.582 Memory Page Size Maximum: 4096 bytes 00:11:16.582 Persistent Memory Region: Not Supported 00:11:16.582 Optional Asynchronous Events Supported 00:11:16.582 Namespace Attribute Notices: Supported 00:11:16.582 Firmware Activation Notices: Not Supported 00:11:16.582 ANA Change Notices: Not Supported 00:11:16.582 PLE Aggregate Log Change Notices: Not Supported 00:11:16.582 LBA Status Info Alert Notices: Not Supported 00:11:16.582 EGE Aggregate Log Change Notices: Not Supported 00:11:16.582 Normal NVM Subsystem Shutdown event: Not Supported 00:11:16.582 Zone Descriptor Change Notices: Not Supported 00:11:16.582 Discovery Log Change Notices: Not Supported 00:11:16.582 Controller Attributes 00:11:16.582 128-bit Host Identifier: Supported 00:11:16.582 Non-Operational Permissive Mode: Not Supported 00:11:16.582 NVM Sets: Not Supported 00:11:16.582 Read Recovery Levels: Not Supported 00:11:16.582 Endurance Groups: Not Supported 00:11:16.582 Predictable Latency Mode: Not Supported 00:11:16.582 Traffic Based Keep ALive: Not Supported 00:11:16.582 Namespace Granularity: Not Supported 00:11:16.582 SQ Associations: Not Supported 00:11:16.582 UUID List: Not Supported 00:11:16.582 Multi-Domain Subsystem: Not Supported 00:11:16.582 Fixed Capacity Management: Not Supported 00:11:16.583 Variable Capacity Management: Not Supported 00:11:16.583 Delete Endurance Group: Not Supported 00:11:16.583 Delete NVM Set: Not Supported 00:11:16.583 Extended LBA Formats Supported: Not Supported 00:11:16.583 Flexible Data Placement 
Supported: Not Supported 00:11:16.583 00:11:16.583 Controller Memory Buffer Support 00:11:16.583 ================================ 00:11:16.583 Supported: No 00:11:16.583 00:11:16.583 Persistent Memory Region Support 00:11:16.583 ================================ 00:11:16.583 Supported: No 00:11:16.583 00:11:16.583 Admin Command Set Attributes 00:11:16.583 ============================ 00:11:16.583 Security Send/Receive: Not Supported 00:11:16.583 Format NVM: Not Supported 00:11:16.583 Firmware Activate/Download: Not Supported 00:11:16.583 Namespace Management: Not Supported 00:11:16.583 Device Self-Test: Not Supported 00:11:16.583 Directives: Not Supported 00:11:16.583 NVMe-MI: Not Supported 00:11:16.583 Virtualization Management: Not Supported 00:11:16.583 Doorbell Buffer Config: Not Supported 00:11:16.583 Get LBA Status Capability: Not Supported 00:11:16.583 Command & Feature Lockdown Capability: Not Supported 00:11:16.583 Abort Command Limit: 4 00:11:16.583 Async Event Request Limit: 4 00:11:16.583 Number of Firmware Slots: N/A 00:11:16.583 Firmware Slot 1 Read-Only: N/A 00:11:16.583 Firmware Activation Without Reset: N/A 00:11:16.583 Multiple Update Detection Support: N/A 00:11:16.583 Firmware Update Granularity: No Information Provided 00:11:16.583 Per-Namespace SMART Log: No 00:11:16.583 Asymmetric Namespace Access Log Page: Not Supported 00:11:16.583 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:16.583 Command Effects Log Page: Supported 00:11:16.583 Get Log Page Extended Data: Supported 00:11:16.583 Telemetry Log Pages: Not Supported 00:11:16.583 Persistent Event Log Pages: Not Supported 00:11:16.583 Supported Log Pages Log Page: May Support 00:11:16.583 Commands Supported & Effects Log Page: Not Supported 00:11:16.583 Feature Identifiers & Effects Log Page:May Support 00:11:16.583 NVMe-MI Commands & Effects Log Page: May Support 00:11:16.583 Data Area 4 for Telemetry Log: Not Supported 00:11:16.583 Error Log Page Entries Supported: 128 00:11:16.583 Keep 
Alive: Supported 00:11:16.583 Keep Alive Granularity: 10000 ms 00:11:16.583 00:11:16.583 NVM Command Set Attributes 00:11:16.583 ========================== 00:11:16.583 Submission Queue Entry Size 00:11:16.583 Max: 64 00:11:16.583 Min: 64 00:11:16.583 Completion Queue Entry Size 00:11:16.583 Max: 16 00:11:16.583 Min: 16 00:11:16.583 Number of Namespaces: 32 00:11:16.583 Compare Command: Supported 00:11:16.583 Write Uncorrectable Command: Not Supported 00:11:16.583 Dataset Management Command: Supported 00:11:16.583 Write Zeroes Command: Supported 00:11:16.583 Set Features Save Field: Not Supported 00:11:16.583 Reservations: Not Supported 00:11:16.583 Timestamp: Not Supported 00:11:16.583 Copy: Supported 00:11:16.583 Volatile Write Cache: Present 00:11:16.583 Atomic Write Unit (Normal): 1 00:11:16.583 Atomic Write Unit (PFail): 1 00:11:16.583 Atomic Compare & Write Unit: 1 00:11:16.583 Fused Compare & Write: Supported 00:11:16.583 Scatter-Gather List 00:11:16.583 SGL Command Set: Supported (Dword aligned) 00:11:16.583 SGL Keyed: Not Supported 00:11:16.583 SGL Bit Bucket Descriptor: Not Supported 00:11:16.583 SGL Metadata Pointer: Not Supported 00:11:16.583 Oversized SGL: Not Supported 00:11:16.583 SGL Metadata Address: Not Supported 00:11:16.583 SGL Offset: Not Supported 00:11:16.583 Transport SGL Data Block: Not Supported 00:11:16.583 Replay Protected Memory Block: Not Supported 00:11:16.583 00:11:16.583 Firmware Slot Information 00:11:16.583 ========================= 00:11:16.583 Active slot: 1 00:11:16.583 Slot 1 Firmware Revision: 24.09 00:11:16.583 00:11:16.583 00:11:16.583 Commands Supported and Effects 00:11:16.583 ============================== 00:11:16.583 Admin Commands 00:11:16.583 -------------- 00:11:16.583 Get Log Page (02h): Supported 00:11:16.583 Identify (06h): Supported 00:11:16.583 Abort (08h): Supported 00:11:16.583 Set Features (09h): Supported 00:11:16.583 Get Features (0Ah): Supported 00:11:16.583 Asynchronous Event Request (0Ch): Supported 
00:11:16.583 Keep Alive (18h): Supported 00:11:16.583 I/O Commands 00:11:16.583 ------------ 00:11:16.583 Flush (00h): Supported LBA-Change 00:11:16.583 Write (01h): Supported LBA-Change 00:11:16.583 Read (02h): Supported 00:11:16.583 Compare (05h): Supported 00:11:16.583 Write Zeroes (08h): Supported LBA-Change 00:11:16.583 Dataset Management (09h): Supported LBA-Change 00:11:16.583 Copy (19h): Supported LBA-Change 00:11:16.583 00:11:16.583 Error Log 00:11:16.583 ========= 00:11:16.583 00:11:16.583 Arbitration 00:11:16.583 =========== 00:11:16.583 Arbitration Burst: 1 00:11:16.583 00:11:16.583 Power Management 00:11:16.583 ================ 00:11:16.583 Number of Power States: 1 00:11:16.583 Current Power State: Power State #0 00:11:16.583 Power State #0: 00:11:16.583 Max Power: 0.00 W 00:11:16.583 Non-Operational State: Operational 00:11:16.583 Entry Latency: Not Reported 00:11:16.583 Exit Latency: Not Reported 00:11:16.583 Relative Read Throughput: 0 00:11:16.583 Relative Read Latency: 0 00:11:16.583 Relative Write Throughput: 0 00:11:16.583 Relative Write Latency: 0 00:11:16.583 Idle Power: Not Reported 00:11:16.583 Active Power: Not Reported 00:11:16.583 Non-Operational Permissive Mode: Not Supported 00:11:16.583 00:11:16.583 Health Information 00:11:16.583 ================== 00:11:16.583 Critical Warnings: 00:11:16.583 Available Spare Space: OK 00:11:16.583 Temperature: OK 00:11:16.583 Device Reliability: OK 00:11:16.583 Read Only: No 00:11:16.583 Volatile Memory Backup: OK 00:11:16.583 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:16.583 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:16.583 Available Spare: 0% 00:11:16.583 [2024-07-12 17:19:35.312707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:16.583 [2024-07-12 17:19:35.312718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:11:16.583 [2024-07-12 17:19:35.312745] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:16.583 [2024-07-12 17:19:35.312754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.583 [2024-07-12 17:19:35.312759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.583 [2024-07-12 17:19:35.312765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.583 [2024-07-12 17:19:35.312770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.583 [2024-07-12 17:19:35.312901] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:16.583 [2024-07-12 17:19:35.312911] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:16.583 [2024-07-12 17:19:35.313904] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:16.583 [2024-07-12 17:19:35.313953] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:16.583 [2024-07-12 17:19:35.313959] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:16.583 [2024-07-12 17:19:35.314908] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:16.583 [2024-07-12 17:19:35.314918] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete 
in 0 milliseconds 00:11:16.583 [2024-07-12 17:19:35.314966] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:16.583 [2024-07-12 17:19:35.320383] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:16.583 Available Spare Threshold: 0% 00:11:16.583 Life Percentage Used: 0% 00:11:16.583 Data Units Read: 0 00:11:16.583 Data Units Written: 0 00:11:16.583 Host Read Commands: 0 00:11:16.583 Host Write Commands: 0 00:11:16.583 Controller Busy Time: 0 minutes 00:11:16.583 Power Cycles: 0 00:11:16.583 Power On Hours: 0 hours 00:11:16.583 Unsafe Shutdowns: 0 00:11:16.583 Unrecoverable Media Errors: 0 00:11:16.583 Lifetime Error Log Entries: 0 00:11:16.583 Warning Temperature Time: 0 minutes 00:11:16.583 Critical Temperature Time: 0 minutes 00:11:16.583 00:11:16.583 Number of Queues 00:11:16.583 ================ 00:11:16.583 Number of I/O Submission Queues: 127 00:11:16.583 Number of I/O Completion Queues: 127 00:11:16.583 00:11:16.583 Active Namespaces 00:11:16.583 ================= 00:11:16.583 Namespace ID:1 00:11:16.583 Error Recovery Timeout: Unlimited 00:11:16.584 Command Set Identifier: NVM (00h) 00:11:16.584 Deallocate: Supported 00:11:16.584 Deallocated/Unwritten Error: Not Supported 00:11:16.584 Deallocated Read Value: Unknown 00:11:16.584 Deallocate in Write Zeroes: Not Supported 00:11:16.584 Deallocated Guard Field: 0xFFFF 00:11:16.584 Flush: Supported 00:11:16.584 Reservation: Supported 00:11:16.584 Namespace Sharing Capabilities: Multiple Controllers 00:11:16.584 Size (in LBAs): 131072 (0GiB) 00:11:16.584 Capacity (in LBAs): 131072 (0GiB) 00:11:16.584 Utilization (in LBAs): 131072 (0GiB) 00:11:16.584 NGUID: F0768B4BFB4C4DBD9649FFDE18163FD5 00:11:16.584 UUID: f0768b4b-fb4c-4dbd-9649-ffde18163fd5 00:11:16.584 Thin Provisioning: Not Supported 00:11:16.584 Per-NS Atomic Units: Yes 00:11:16.584 Atomic Boundary Size (Normal): 0 
00:11:16.584 Atomic Boundary Size (PFail): 0 00:11:16.584 Atomic Boundary Offset: 0 00:11:16.584 Maximum Single Source Range Length: 65535 00:11:16.584 Maximum Copy Length: 65535 00:11:16.584 Maximum Source Range Count: 1 00:11:16.584 NGUID/EUI64 Never Reused: No 00:11:16.584 Namespace Write Protected: No 00:11:16.584 Number of LBA Formats: 1 00:11:16.584 Current LBA Format: LBA Format #00 00:11:16.584 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:16.584 00:11:16.584 17:19:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:16.843 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.843 [2024-07-12 17:19:35.536145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:22.117 Initializing NVMe Controllers 00:11:22.117 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:22.117 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:22.117 Initialization complete. Launching workers. 
00:11:22.117 ======================================================== 00:11:22.117 Latency(us) 00:11:22.117 Device Information : IOPS MiB/s Average min max 00:11:22.117 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39926.08 155.96 3205.73 944.85 7331.84 00:11:22.117 ======================================================== 00:11:22.117 Total : 39926.08 155.96 3205.73 944.85 7331.84 00:11:22.117 00:11:22.117 [2024-07-12 17:19:40.554518] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:22.117 17:19:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:22.117 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.117 [2024-07-12 17:19:40.780598] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:27.391 Initializing NVMe Controllers 00:11:27.391 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:27.391 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:27.391 Initialization complete. Launching workers. 
00:11:27.391 ======================================================== 00:11:27.391 Latency(us) 00:11:27.391 Device Information : IOPS MiB/s Average min max 00:11:27.391 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.22 62.71 7978.32 5986.08 9977.23 00:11:27.391 ======================================================== 00:11:27.391 Total : 16054.22 62.71 7978.32 5986.08 9977.23 00:11:27.391 00:11:27.391 [2024-07-12 17:19:45.824047] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:27.391 17:19:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:27.391 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.391 [2024-07-12 17:19:46.025033] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:32.661 [2024-07-12 17:19:51.131862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:32.661 Initializing NVMe Controllers 00:11:32.661 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:32.661 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:32.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:32.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:32.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:32.661 Initialization complete. Launching workers. 
00:11:32.661 Starting thread on core 2 00:11:32.661 Starting thread on core 3 00:11:32.661 Starting thread on core 1 00:11:32.661 17:19:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:32.661 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.661 [2024-07-12 17:19:51.414823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:36.916 [2024-07-12 17:19:55.201573] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:36.916 Initializing NVMe Controllers 00:11:36.916 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:36.916 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:36.916 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:36.916 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:36.916 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:36.917 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:36.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:36.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:36.917 Initialization complete. Launching workers. 
00:11:36.917 Starting thread on core 1 with urgent priority queue 00:11:36.917 Starting thread on core 2 with urgent priority queue 00:11:36.917 Starting thread on core 3 with urgent priority queue 00:11:36.917 Starting thread on core 0 with urgent priority queue 00:11:36.917 SPDK bdev Controller (SPDK1 ) core 0: 4637.33 IO/s 21.56 secs/100000 ios 00:11:36.917 SPDK bdev Controller (SPDK1 ) core 1: 5180.33 IO/s 19.30 secs/100000 ios 00:11:36.917 SPDK bdev Controller (SPDK1 ) core 2: 4994.00 IO/s 20.02 secs/100000 ios 00:11:36.917 SPDK bdev Controller (SPDK1 ) core 3: 4425.33 IO/s 22.60 secs/100000 ios 00:11:36.917 ======================================================== 00:11:36.917 00:11:36.917 17:19:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:36.917 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.917 [2024-07-12 17:19:55.474874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:36.917 Initializing NVMe Controllers 00:11:36.917 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:36.917 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:36.917 Namespace ID: 1 size: 0GB 00:11:36.917 Initialization complete. 00:11:36.917 INFO: using host memory buffer for IO 00:11:36.917 Hello world! 
00:11:36.917 [2024-07-12 17:19:55.507069] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:36.917 17:19:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:36.917 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.174 [2024-07-12 17:19:55.774458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:38.110 Initializing NVMe Controllers 00:11:38.110 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:38.110 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:38.110 Initialization complete. Launching workers. 00:11:38.110 submit (in ns) avg, min, max = 6906.7, 3285.2, 3998788.7 00:11:38.110 complete (in ns) avg, min, max = 20928.3, 1819.1, 4181380.0 00:11:38.110 00:11:38.111 Submit histogram 00:11:38.111 ================ 00:11:38.111 Range in us Cumulative Count 00:11:38.111 3.283 - 3.297: 0.0244% ( 4) 00:11:38.111 3.297 - 3.311: 0.0794% ( 9) 00:11:38.111 3.311 - 3.325: 0.1709% ( 15) 00:11:38.111 3.325 - 3.339: 1.1050% ( 153) 00:11:38.111 3.339 - 3.353: 4.1453% ( 498) 00:11:38.111 3.353 - 3.367: 9.5665% ( 888) 00:11:38.111 3.367 - 3.381: 15.2564% ( 932) 00:11:38.111 3.381 - 3.395: 21.7460% ( 1063) 00:11:38.111 3.395 - 3.409: 27.9731% ( 1020) 00:11:38.111 3.409 - 3.423: 33.1685% ( 851) 00:11:38.111 3.423 - 3.437: 38.4310% ( 862) 00:11:38.111 3.437 - 3.450: 43.8950% ( 895) 00:11:38.111 3.450 - 3.464: 48.5165% ( 757) 00:11:38.111 3.464 - 3.478: 52.6068% ( 670) 00:11:38.111 3.478 - 3.492: 58.0403% ( 890) 00:11:38.111 3.492 - 3.506: 65.5311% ( 1227) 00:11:38.111 3.506 - 3.520: 70.0122% ( 734) 00:11:38.111 3.520 - 3.534: 74.6093% ( 753) 00:11:38.111 3.534 - 3.548: 79.8657% ( 861) 00:11:38.111 3.548 - 3.562: 83.6996% ( 628) 
00:11:38.111 3.562 - 3.590: 87.0330% ( 546) 00:11:38.111 3.590 - 3.617: 87.9976% ( 158) 00:11:38.111 3.617 - 3.645: 88.9072% ( 149) 00:11:38.111 3.645 - 3.673: 90.3297% ( 233) 00:11:38.111 3.673 - 3.701: 92.1001% ( 290) 00:11:38.111 3.701 - 3.729: 93.7729% ( 274) 00:11:38.111 3.729 - 3.757: 95.4884% ( 281) 00:11:38.111 3.757 - 3.784: 97.0147% ( 250) 00:11:38.111 3.784 - 3.812: 98.1868% ( 192) 00:11:38.111 3.812 - 3.840: 98.8095% ( 102) 00:11:38.111 3.840 - 3.868: 99.2369% ( 70) 00:11:38.111 3.868 - 3.896: 99.5238% ( 47) 00:11:38.111 3.896 - 3.923: 99.6093% ( 14) 00:11:38.111 3.923 - 3.951: 99.6337% ( 4) 00:11:38.111 3.951 - 3.979: 99.6398% ( 1) 00:11:38.111 3.979 - 4.007: 99.6459% ( 1) 00:11:38.111 4.925 - 4.953: 99.6520% ( 1) 00:11:38.111 4.981 - 5.009: 99.6581% ( 1) 00:11:38.111 5.092 - 5.120: 99.6703% ( 2) 00:11:38.111 5.231 - 5.259: 99.6764% ( 1) 00:11:38.111 5.287 - 5.315: 99.6825% ( 1) 00:11:38.111 5.315 - 5.343: 99.6886% ( 1) 00:11:38.111 5.343 - 5.370: 99.6947% ( 1) 00:11:38.111 5.482 - 5.510: 99.7070% ( 2) 00:11:38.111 5.510 - 5.537: 99.7192% ( 2) 00:11:38.111 5.565 - 5.593: 99.7253% ( 1) 00:11:38.111 5.621 - 5.649: 99.7314% ( 1) 00:11:38.111 5.649 - 5.677: 99.7375% ( 1) 00:11:38.111 5.704 - 5.732: 99.7436% ( 1) 00:11:38.111 5.843 - 5.871: 99.7497% ( 1) 00:11:38.111 5.871 - 5.899: 99.7558% ( 1) 00:11:38.111 5.927 - 5.955: 99.7619% ( 1) 00:11:38.111 5.955 - 5.983: 99.7680% ( 1) 00:11:38.111 6.010 - 6.038: 99.7741% ( 1) 00:11:38.111 6.066 - 6.094: 99.7802% ( 1) 00:11:38.111 6.122 - 6.150: 99.7863% ( 1) 00:11:38.111 6.150 - 6.177: 99.7985% ( 2) 00:11:38.111 6.177 - 6.205: 99.8046% ( 1) 00:11:38.111 6.372 - 6.400: 99.8107% ( 1) 00:11:38.111 6.428 - 6.456: 99.8230% ( 2) 00:11:38.111 6.456 - 6.483: 99.8352% ( 2) 00:11:38.111 6.790 - 6.817: 99.8413% ( 1) 00:11:38.111 6.817 - 6.845: 99.8474% ( 1) 00:11:38.111 6.845 - 6.873: 99.8535% ( 1) 00:11:38.111 6.873 - 6.901: 99.8596% ( 1) 00:11:38.111 6.957 - 6.984: 99.8657% ( 1) 00:11:38.111 7.235 - 7.290: 99.8779% ( 2) 
00:11:38.111 7.402 - 7.457: 99.8840% ( 1) 00:11:38.111 7.457 - 7.513: 99.8901% ( 1) 00:11:38.111 7.736 - 7.791: 99.8962% ( 1) 00:11:38.111 8.070 - 8.125: 99.9023% ( 1) 00:11:38.111 8.737 - 8.793: 99.9084% ( 1) 00:11:38.111 8.849 - 8.904: 99.9145% ( 1) 00:11:38.111 3960.654 - 3989.148: 99.9206% ( 1) 00:11:38.111 3989.148 - 4017.642: 100.0000% ( 13) 00:11:38.111 00:11:38.111 Complete histogram 00:11:38.111 ================== 00:11:38.111 Range in us Cumulative Count 00:11:38.111 1.809 - 1.823: 0.0366% ( 6) 00:11:38.111 1.823 - 1.837: 1.7399% ( 279) 00:11:38.111 1.837 - [2024-07-12 17:19:56.798250] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:38.111 1.850: 5.6288% ( 637) 00:11:38.111 1.850 - 1.864: 7.5580% ( 316) 00:11:38.111 1.864 - 1.878: 9.8474% ( 375) 00:11:38.111 1.878 - 1.892: 40.5983% ( 5037) 00:11:38.111 1.892 - 1.906: 81.2332% ( 6656) 00:11:38.111 1.906 - 1.920: 90.8913% ( 1582) 00:11:38.111 1.920 - 1.934: 95.2564% ( 715) 00:11:38.111 1.934 - 1.948: 96.5568% ( 213) 00:11:38.111 1.948 - 1.962: 97.5031% ( 155) 00:11:38.111 1.962 - 1.976: 98.6203% ( 183) 00:11:38.111 1.976 - 1.990: 99.0781% ( 75) 00:11:38.111 1.990 - 2.003: 99.3040% ( 37) 00:11:38.111 2.003 - 2.017: 99.3346% ( 5) 00:11:38.111 2.017 - 2.031: 99.3468% ( 2) 00:11:38.111 2.045 - 2.059: 99.3590% ( 2) 00:11:38.111 2.059 - 2.073: 99.3651% ( 1) 00:11:38.111 2.087 - 2.101: 99.3712% ( 1) 00:11:38.111 2.240 - 2.254: 99.3773% ( 1) 00:11:38.111 2.323 - 2.337: 99.3834% ( 1) 00:11:38.111 2.351 - 2.365: 99.3895% ( 1) 00:11:38.111 3.395 - 3.409: 99.3956% ( 1) 00:11:38.111 3.548 - 3.562: 99.4017% ( 1) 00:11:38.111 3.645 - 3.673: 99.4139% ( 2) 00:11:38.111 3.729 - 3.757: 99.4200% ( 1) 00:11:38.111 3.812 - 3.840: 99.4261% ( 1) 00:11:38.111 3.868 - 3.896: 99.4322% ( 1) 00:11:38.111 4.118 - 4.146: 99.4383% ( 1) 00:11:38.111 4.146 - 4.174: 99.4505% ( 2) 00:11:38.111 4.536 - 4.563: 99.4567% ( 1) 00:11:38.111 4.563 - 4.591: 99.4628% ( 1) 00:11:38.111 4.758 
- 4.786: 99.4689% ( 1) 00:11:38.111 4.786 - 4.814: 99.4750% ( 1) 00:11:38.111 5.148 - 5.176: 99.4811% ( 1) 00:11:38.111 5.259 - 5.287: 99.4872% ( 1) 00:11:38.111 6.623 - 6.650: 99.4933% ( 1) 00:11:38.111 7.513 - 7.569: 99.4994% ( 1) 00:11:38.111 8.515 - 8.570: 99.5055% ( 1) 00:11:38.111 8.737 - 8.793: 99.5116% ( 1) 00:11:38.111 9.183 - 9.238: 99.5177% ( 1) 00:11:38.111 9.739 - 9.795: 99.5238% ( 1) 00:11:38.111 3989.148 - 4017.642: 99.9939% ( 77) 00:11:38.111 4160.111 - 4188.605: 100.0000% ( 1) 00:11:38.111 00:11:38.111 17:19:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:38.111 17:19:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:38.111 17:19:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:38.111 17:19:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:38.111 17:19:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:38.370 [ 00:11:38.370 { 00:11:38.370 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:38.370 "subtype": "Discovery", 00:11:38.370 "listen_addresses": [], 00:11:38.370 "allow_any_host": true, 00:11:38.370 "hosts": [] 00:11:38.370 }, 00:11:38.370 { 00:11:38.370 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:38.370 "subtype": "NVMe", 00:11:38.370 "listen_addresses": [ 00:11:38.370 { 00:11:38.370 "trtype": "VFIOUSER", 00:11:38.370 "adrfam": "IPv4", 00:11:38.370 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:38.370 "trsvcid": "0" 00:11:38.370 } 00:11:38.370 ], 00:11:38.370 "allow_any_host": true, 00:11:38.370 "hosts": [], 00:11:38.370 "serial_number": "SPDK1", 00:11:38.370 "model_number": "SPDK bdev Controller", 00:11:38.370 "max_namespaces": 32, 00:11:38.370 
"min_cntlid": 1, 00:11:38.370 "max_cntlid": 65519, 00:11:38.370 "namespaces": [ 00:11:38.370 { 00:11:38.370 "nsid": 1, 00:11:38.370 "bdev_name": "Malloc1", 00:11:38.370 "name": "Malloc1", 00:11:38.370 "nguid": "F0768B4BFB4C4DBD9649FFDE18163FD5", 00:11:38.370 "uuid": "f0768b4b-fb4c-4dbd-9649-ffde18163fd5" 00:11:38.370 } 00:11:38.370 ] 00:11:38.370 }, 00:11:38.370 { 00:11:38.370 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:38.370 "subtype": "NVMe", 00:11:38.370 "listen_addresses": [ 00:11:38.370 { 00:11:38.370 "trtype": "VFIOUSER", 00:11:38.370 "adrfam": "IPv4", 00:11:38.370 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:38.370 "trsvcid": "0" 00:11:38.370 } 00:11:38.370 ], 00:11:38.370 "allow_any_host": true, 00:11:38.370 "hosts": [], 00:11:38.370 "serial_number": "SPDK2", 00:11:38.370 "model_number": "SPDK bdev Controller", 00:11:38.370 "max_namespaces": 32, 00:11:38.370 "min_cntlid": 1, 00:11:38.370 "max_cntlid": 65519, 00:11:38.370 "namespaces": [ 00:11:38.370 { 00:11:38.370 "nsid": 1, 00:11:38.370 "bdev_name": "Malloc2", 00:11:38.370 "name": "Malloc2", 00:11:38.370 "nguid": "2C04BA2F2F1A4BA0858698CB955F07B8", 00:11:38.370 "uuid": "2c04ba2f-2f1a-4ba0-8586-98cb955f07b8" 00:11:38.370 } 00:11:38.370 ] 00:11:38.370 } 00:11:38.370 ] 00:11:38.370 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:38.370 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3992725 00:11:38.370 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:38.370 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:38.370 17:19:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:38.370 17:19:57 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:38.370 17:19:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:38.370 17:19:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:38.370 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:38.370 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:38.370 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.629 [2024-07-12 17:19:57.175074] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:38.629 Malloc3 00:11:38.629 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:38.629 [2024-07-12 17:19:57.392740] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:38.888 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:38.888 Asynchronous Event Request test 00:11:38.888 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:38.888 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:38.888 Registering asynchronous event callbacks... 00:11:38.888 Starting namespace attribute notice tests for all controllers... 00:11:38.888 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:38.888 aer_cb - Changed Namespace 00:11:38.888 Cleaning up... 
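The AER test above hot-adds `Malloc3` to `nqn.2019-07.io.spdk:cnode1` via `nvmf_subsystem_add_ns` and then re-runs `nvmf_get_subsystems`, whose JSON dump follows. As a minimal sketch (not part of the test suite; the JSON below is abbreviated to the fields printed in this log), the hot-add can be verified programmatically from that RPC output:

```python
import json

# Abbreviated nvmf_get_subsystems output, reduced to the fields
# shown in the log dump below; only cnode1's namespaces matter here.
subsystems = json.loads("""
[
  {
    "nqn": "nqn.2019-07.io.spdk:cnode1",
    "subtype": "NVMe",
    "namespaces": [
      {"nsid": 1, "bdev_name": "Malloc1"},
      {"nsid": 2, "bdev_name": "Malloc3"}
    ]
  }
]
""")

# Locate the subsystem the AER test targets and index its namespaces by nsid.
cnode1 = next(s for s in subsystems if s["nqn"] == "nqn.2019-07.io.spdk:cnode1")
bdevs = {ns["nsid"]: ns["bdev_name"] for ns in cnode1["namespaces"]}

# After nvmf_subsystem_add_ns, Malloc3 should appear as nsid 2,
# which is what triggers the namespace-attribute-changed AEN above.
assert bdevs[2] == "Malloc3"
print(bdevs)
```

In the live test this JSON would come from `scripts/rpc.py nvmf_get_subsystems` rather than a literal string.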
00:11:38.888 [ 00:11:38.888 { 00:11:38.888 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:38.888 "subtype": "Discovery", 00:11:38.888 "listen_addresses": [], 00:11:38.888 "allow_any_host": true, 00:11:38.888 "hosts": [] 00:11:38.888 }, 00:11:38.888 { 00:11:38.888 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:38.889 "subtype": "NVMe", 00:11:38.889 "listen_addresses": [ 00:11:38.889 { 00:11:38.889 "trtype": "VFIOUSER", 00:11:38.889 "adrfam": "IPv4", 00:11:38.889 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:38.889 "trsvcid": "0" 00:11:38.889 } 00:11:38.889 ], 00:11:38.889 "allow_any_host": true, 00:11:38.889 "hosts": [], 00:11:38.889 "serial_number": "SPDK1", 00:11:38.889 "model_number": "SPDK bdev Controller", 00:11:38.889 "max_namespaces": 32, 00:11:38.889 "min_cntlid": 1, 00:11:38.889 "max_cntlid": 65519, 00:11:38.889 "namespaces": [ 00:11:38.889 { 00:11:38.889 "nsid": 1, 00:11:38.889 "bdev_name": "Malloc1", 00:11:38.889 "name": "Malloc1", 00:11:38.889 "nguid": "F0768B4BFB4C4DBD9649FFDE18163FD5", 00:11:38.889 "uuid": "f0768b4b-fb4c-4dbd-9649-ffde18163fd5" 00:11:38.889 }, 00:11:38.889 { 00:11:38.889 "nsid": 2, 00:11:38.889 "bdev_name": "Malloc3", 00:11:38.889 "name": "Malloc3", 00:11:38.889 "nguid": "A5B08E1F1CC945D2A1B254BF852055C7", 00:11:38.889 "uuid": "a5b08e1f-1cc9-45d2-a1b2-54bf852055c7" 00:11:38.889 } 00:11:38.889 ] 00:11:38.889 }, 00:11:38.889 { 00:11:38.889 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:38.889 "subtype": "NVMe", 00:11:38.889 "listen_addresses": [ 00:11:38.889 { 00:11:38.889 "trtype": "VFIOUSER", 00:11:38.889 "adrfam": "IPv4", 00:11:38.889 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:38.889 "trsvcid": "0" 00:11:38.889 } 00:11:38.889 ], 00:11:38.889 "allow_any_host": true, 00:11:38.889 "hosts": [], 00:11:38.889 "serial_number": "SPDK2", 00:11:38.889 "model_number": "SPDK bdev Controller", 00:11:38.889 "max_namespaces": 32, 00:11:38.889 "min_cntlid": 1, 00:11:38.889 "max_cntlid": 65519, 00:11:38.889 "namespaces": [ 
00:11:38.889 { 00:11:38.889 "nsid": 1, 00:11:38.889 "bdev_name": "Malloc2", 00:11:38.889 "name": "Malloc2", 00:11:38.889 "nguid": "2C04BA2F2F1A4BA0858698CB955F07B8", 00:11:38.889 "uuid": "2c04ba2f-2f1a-4ba0-8586-98cb955f07b8" 00:11:38.889 } 00:11:38.889 ] 00:11:38.889 } 00:11:38.889 ] 00:11:38.889 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3992725 00:11:38.889 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:38.889 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:38.889 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:38.889 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:38.889 [2024-07-12 17:19:57.601484] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:11:38.889 [2024-07-12 17:19:57.601509] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3992837 ] 00:11:38.889 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.889 [2024-07-12 17:19:57.628770] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:38.889 [2024-07-12 17:19:57.638605] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:38.889 [2024-07-12 17:19:57.638625] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb93cbb3000 00:11:38.889 [2024-07-12 17:19:57.639613] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:38.889 [2024-07-12 17:19:57.640619] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:38.889 [2024-07-12 17:19:57.641625] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:38.889 [2024-07-12 17:19:57.642636] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:38.889 [2024-07-12 17:19:57.643643] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:38.889 [2024-07-12 17:19:57.644649] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:38.889 [2024-07-12 17:19:57.645656] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:38.889 [2024-07-12 17:19:57.646666] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:38.889 [2024-07-12 17:19:57.647681] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:38.889 [2024-07-12 17:19:57.647690] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb93cba8000 00:11:38.889 [2024-07-12 17:19:57.648629] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:38.889 [2024-07-12 17:19:57.661144] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:38.889 [2024-07-12 17:19:57.661169] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:38.889 [2024-07-12 17:19:57.663225] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:38.889 [2024-07-12 17:19:57.663264] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:38.889 [2024-07-12 17:19:57.663331] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:11:38.889 [2024-07-12 17:19:57.663345] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:38.889 [2024-07-12 17:19:57.663350] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:38.889 [2024-07-12 17:19:57.664224] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:38.889 [2024-07-12 17:19:57.664233] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:38.889 [2024-07-12 17:19:57.664239] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:38.889 [2024-07-12 17:19:57.665230] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:38.889 [2024-07-12 17:19:57.665239] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:38.889 [2024-07-12 17:19:57.665245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:38.889 [2024-07-12 17:19:57.666234] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:38.889 [2024-07-12 17:19:57.666243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:39.149 [2024-07-12 17:19:57.667239] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:39.149 [2024-07-12 17:19:57.667249] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:39.149 [2024-07-12 17:19:57.667253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:39.149 [2024-07-12 17:19:57.667259] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:39.149 [2024-07-12 17:19:57.667364] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:39.149 [2024-07-12 17:19:57.667369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:39.149 [2024-07-12 17:19:57.667373] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:39.149 [2024-07-12 17:19:57.668246] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:39.149 [2024-07-12 17:19:57.669251] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:39.149 [2024-07-12 17:19:57.670256] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:39.149 [2024-07-12 17:19:57.671264] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:39.149 [2024-07-12 17:19:57.671302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:39.149 [2024-07-12 17:19:57.672274] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:39.149 [2024-07-12 17:19:57.672283] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:39.149 [2024-07-12 17:19:57.672287] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.672304] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:39.149 [2024-07-12 17:19:57.672311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.672321] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:39.149 [2024-07-12 17:19:57.672326] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:39.149 [2024-07-12 17:19:57.672336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:39.149 [2024-07-12 17:19:57.678384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:39.149 [2024-07-12 17:19:57.678396] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:39.149 [2024-07-12 17:19:57.678403] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:39.149 [2024-07-12 17:19:57.678407] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:39.149 [2024-07-12 17:19:57.678412] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:39.149 [2024-07-12 17:19:57.678416] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:39.149 [2024-07-12 
17:19:57.678420] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:39.149 [2024-07-12 17:19:57.678424] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.678431] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.678440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:39.149 [2024-07-12 17:19:57.686382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:39.149 [2024-07-12 17:19:57.686397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.149 [2024-07-12 17:19:57.686407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.149 [2024-07-12 17:19:57.686414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.149 [2024-07-12 17:19:57.686422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.149 [2024-07-12 17:19:57.686426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.686434] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:39.149 [2024-07-12 
17:19:57.686442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:39.149 [2024-07-12 17:19:57.694381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:39.149 [2024-07-12 17:19:57.694389] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:39.149 [2024-07-12 17:19:57.694394] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.694400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.694405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.694414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:39.149 [2024-07-12 17:19:57.702383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:39.149 [2024-07-12 17:19:57.702436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.702444] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.702450] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:39.149 [2024-07-12 
17:19:57.702455] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:39.149 [2024-07-12 17:19:57.702462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:39.149 [2024-07-12 17:19:57.710382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:39.149 [2024-07-12 17:19:57.710394] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:39.149 [2024-07-12 17:19:57.710402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.710409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.710416] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:39.149 [2024-07-12 17:19:57.710420] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:39.149 [2024-07-12 17:19:57.710426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:39.149 [2024-07-12 17:19:57.718383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:39.149 [2024-07-12 17:19:57.718398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.718405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id 
descriptors (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.718412] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:39.149 [2024-07-12 17:19:57.718416] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:39.149 [2024-07-12 17:19:57.718422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:39.149 [2024-07-12 17:19:57.726384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:39.149 [2024-07-12 17:19:57.726394] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.726400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.726409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.726414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.726418] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.726423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:39.149 [2024-07-12 17:19:57.726427] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - 
Host ID 00:11:39.150 [2024-07-12 17:19:57.726431] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:39.150 [2024-07-12 17:19:57.726436] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:39.150 [2024-07-12 17:19:57.726450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:39.150 [2024-07-12 17:19:57.734383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:39.150 [2024-07-12 17:19:57.734396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:39.150 [2024-07-12 17:19:57.742383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:39.150 [2024-07-12 17:19:57.742396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:39.150 [2024-07-12 17:19:57.750384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:39.150 [2024-07-12 17:19:57.750396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:39.150 [2024-07-12 17:19:57.758384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:39.150 [2024-07-12 17:19:57.758400] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:39.150 [2024-07-12 17:19:57.758405] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:39.150 [2024-07-12 
17:19:57.758410] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:39.150 [2024-07-12 17:19:57.758413] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:39.150 [2024-07-12 17:19:57.758420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:39.150 [2024-07-12 17:19:57.758426] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:39.150 [2024-07-12 17:19:57.758430] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:39.150 [2024-07-12 17:19:57.758435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:39.150 [2024-07-12 17:19:57.758441] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:39.150 [2024-07-12 17:19:57.758445] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:39.150 [2024-07-12 17:19:57.758451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:39.150 [2024-07-12 17:19:57.758457] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:39.150 [2024-07-12 17:19:57.758461] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:39.150 [2024-07-12 17:19:57.758466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:39.150 [2024-07-12 17:19:57.766384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:39.150 [2024-07-12 17:19:57.766397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:39.150 [2024-07-12 17:19:57.766406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:39.150 [2024-07-12 17:19:57.766412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:39.150 ===================================================== 00:11:39.150 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:39.150 ===================================================== 00:11:39.150 Controller Capabilities/Features 00:11:39.150 ================================ 00:11:39.150 Vendor ID: 4e58 00:11:39.150 Subsystem Vendor ID: 4e58 00:11:39.150 Serial Number: SPDK2 00:11:39.150 Model Number: SPDK bdev Controller 00:11:39.150 Firmware Version: 24.09 00:11:39.150 Recommended Arb Burst: 6 00:11:39.150 IEEE OUI Identifier: 8d 6b 50 00:11:39.150 Multi-path I/O 00:11:39.150 May have multiple subsystem ports: Yes 00:11:39.150 May have multiple controllers: Yes 00:11:39.150 Associated with SR-IOV VF: No 00:11:39.150 Max Data Transfer Size: 131072 00:11:39.150 Max Number of Namespaces: 32 00:11:39.150 Max Number of I/O Queues: 127 00:11:39.150 NVMe Specification Version (VS): 1.3 00:11:39.150 NVMe Specification Version (Identify): 1.3 00:11:39.150 Maximum Queue Entries: 256 00:11:39.150 Contiguous Queues Required: Yes 00:11:39.150 Arbitration Mechanisms Supported 00:11:39.150 Weighted Round Robin: Not Supported 00:11:39.150 Vendor Specific: Not Supported 00:11:39.150 Reset Timeout: 15000 ms 00:11:39.150 Doorbell Stride: 4 bytes 00:11:39.150 NVM Subsystem Reset: Not Supported 00:11:39.150 Command Sets Supported 00:11:39.150 NVM Command Set: Supported 00:11:39.150 Boot Partition: Not Supported 
00:11:39.150 Memory Page Size Minimum: 4096 bytes 00:11:39.150 Memory Page Size Maximum: 4096 bytes 00:11:39.150 Persistent Memory Region: Not Supported 00:11:39.150 Optional Asynchronous Events Supported 00:11:39.150 Namespace Attribute Notices: Supported 00:11:39.150 Firmware Activation Notices: Not Supported 00:11:39.150 ANA Change Notices: Not Supported 00:11:39.150 PLE Aggregate Log Change Notices: Not Supported 00:11:39.150 LBA Status Info Alert Notices: Not Supported 00:11:39.150 EGE Aggregate Log Change Notices: Not Supported 00:11:39.150 Normal NVM Subsystem Shutdown event: Not Supported 00:11:39.150 Zone Descriptor Change Notices: Not Supported 00:11:39.150 Discovery Log Change Notices: Not Supported 00:11:39.150 Controller Attributes 00:11:39.150 128-bit Host Identifier: Supported 00:11:39.150 Non-Operational Permissive Mode: Not Supported 00:11:39.150 NVM Sets: Not Supported 00:11:39.150 Read Recovery Levels: Not Supported 00:11:39.150 Endurance Groups: Not Supported 00:11:39.150 Predictable Latency Mode: Not Supported 00:11:39.150 Traffic Based Keep ALive: Not Supported 00:11:39.150 Namespace Granularity: Not Supported 00:11:39.150 SQ Associations: Not Supported 00:11:39.150 UUID List: Not Supported 00:11:39.150 Multi-Domain Subsystem: Not Supported 00:11:39.150 Fixed Capacity Management: Not Supported 00:11:39.150 Variable Capacity Management: Not Supported 00:11:39.150 Delete Endurance Group: Not Supported 00:11:39.150 Delete NVM Set: Not Supported 00:11:39.150 Extended LBA Formats Supported: Not Supported 00:11:39.150 Flexible Data Placement Supported: Not Supported 00:11:39.150 00:11:39.150 Controller Memory Buffer Support 00:11:39.150 ================================ 00:11:39.150 Supported: No 00:11:39.150 00:11:39.150 Persistent Memory Region Support 00:11:39.150 ================================ 00:11:39.150 Supported: No 00:11:39.150 00:11:39.150 Admin Command Set Attributes 00:11:39.150 ============================ 00:11:39.150 Security 
Send/Receive: Not Supported 00:11:39.150 Format NVM: Not Supported 00:11:39.150 Firmware Activate/Download: Not Supported 00:11:39.150 Namespace Management: Not Supported 00:11:39.150 Device Self-Test: Not Supported 00:11:39.150 Directives: Not Supported 00:11:39.150 NVMe-MI: Not Supported 00:11:39.150 Virtualization Management: Not Supported 00:11:39.150 Doorbell Buffer Config: Not Supported 00:11:39.150 Get LBA Status Capability: Not Supported 00:11:39.150 Command & Feature Lockdown Capability: Not Supported 00:11:39.150 Abort Command Limit: 4 00:11:39.150 Async Event Request Limit: 4 00:11:39.150 Number of Firmware Slots: N/A 00:11:39.150 Firmware Slot 1 Read-Only: N/A 00:11:39.150 Firmware Activation Without Reset: N/A 00:11:39.150 Multiple Update Detection Support: N/A 00:11:39.150 Firmware Update Granularity: No Information Provided 00:11:39.150 Per-Namespace SMART Log: No 00:11:39.150 Asymmetric Namespace Access Log Page: Not Supported 00:11:39.150 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:39.150 Command Effects Log Page: Supported 00:11:39.150 Get Log Page Extended Data: Supported 00:11:39.150 Telemetry Log Pages: Not Supported 00:11:39.150 Persistent Event Log Pages: Not Supported 00:11:39.150 Supported Log Pages Log Page: May Support 00:11:39.150 Commands Supported & Effects Log Page: Not Supported 00:11:39.150 Feature Identifiers & Effects Log Page:May Support 00:11:39.150 NVMe-MI Commands & Effects Log Page: May Support 00:11:39.150 Data Area 4 for Telemetry Log: Not Supported 00:11:39.150 Error Log Page Entries Supported: 128 00:11:39.150 Keep Alive: Supported 00:11:39.150 Keep Alive Granularity: 10000 ms 00:11:39.150 00:11:39.150 NVM Command Set Attributes 00:11:39.150 ========================== 00:11:39.150 Submission Queue Entry Size 00:11:39.150 Max: 64 00:11:39.150 Min: 64 00:11:39.150 Completion Queue Entry Size 00:11:39.150 Max: 16 00:11:39.150 Min: 16 00:11:39.150 Number of Namespaces: 32 00:11:39.150 Compare Command: Supported 
00:11:39.150 Write Uncorrectable Command: Not Supported 00:11:39.150 Dataset Management Command: Supported 00:11:39.150 Write Zeroes Command: Supported 00:11:39.150 Set Features Save Field: Not Supported 00:11:39.150 Reservations: Not Supported 00:11:39.150 Timestamp: Not Supported 00:11:39.150 Copy: Supported 00:11:39.150 Volatile Write Cache: Present 00:11:39.150 Atomic Write Unit (Normal): 1 00:11:39.150 Atomic Write Unit (PFail): 1 00:11:39.150 Atomic Compare & Write Unit: 1 00:11:39.150 Fused Compare & Write: Supported 00:11:39.150 Scatter-Gather List 00:11:39.150 SGL Command Set: Supported (Dword aligned) 00:11:39.150 SGL Keyed: Not Supported 00:11:39.150 SGL Bit Bucket Descriptor: Not Supported 00:11:39.150 SGL Metadata Pointer: Not Supported 00:11:39.150 Oversized SGL: Not Supported 00:11:39.150 SGL Metadata Address: Not Supported 00:11:39.150 SGL Offset: Not Supported 00:11:39.150 Transport SGL Data Block: Not Supported 00:11:39.150 Replay Protected Memory Block: Not Supported 00:11:39.150 00:11:39.151 Firmware Slot Information 00:11:39.151 ========================= 00:11:39.151 Active slot: 1 00:11:39.151 Slot 1 Firmware Revision: 24.09 00:11:39.151 00:11:39.151 00:11:39.151 Commands Supported and Effects 00:11:39.151 ============================== 00:11:39.151 Admin Commands 00:11:39.151 -------------- 00:11:39.151 Get Log Page (02h): Supported 00:11:39.151 Identify (06h): Supported 00:11:39.151 Abort (08h): Supported 00:11:39.151 Set Features (09h): Supported 00:11:39.151 Get Features (0Ah): Supported 00:11:39.151 Asynchronous Event Request (0Ch): Supported 00:11:39.151 Keep Alive (18h): Supported 00:11:39.151 I/O Commands 00:11:39.151 ------------ 00:11:39.151 Flush (00h): Supported LBA-Change 00:11:39.151 Write (01h): Supported LBA-Change 00:11:39.151 Read (02h): Supported 00:11:39.151 Compare (05h): Supported 00:11:39.151 Write Zeroes (08h): Supported LBA-Change 00:11:39.151 Dataset Management (09h): Supported LBA-Change 00:11:39.151 Copy (19h): 
Supported LBA-Change 00:11:39.151 00:11:39.151 Error Log 00:11:39.151 ========= 00:11:39.151 00:11:39.151 Arbitration 00:11:39.151 =========== 00:11:39.151 Arbitration Burst: 1 00:11:39.151 00:11:39.151 Power Management 00:11:39.151 ================ 00:11:39.151 Number of Power States: 1 00:11:39.151 Current Power State: Power State #0 00:11:39.151 Power State #0: 00:11:39.151 Max Power: 0.00 W 00:11:39.151 Non-Operational State: Operational 00:11:39.151 Entry Latency: Not Reported 00:11:39.151 Exit Latency: Not Reported 00:11:39.151 Relative Read Throughput: 0 00:11:39.151 Relative Read Latency: 0 00:11:39.151 Relative Write Throughput: 0 00:11:39.151 Relative Write Latency: 0 00:11:39.151 Idle Power: Not Reported 00:11:39.151 Active Power: Not Reported 00:11:39.151 Non-Operational Permissive Mode: Not Supported 00:11:39.151 00:11:39.151 Health Information 00:11:39.151 ================== 00:11:39.151 Critical Warnings: 00:11:39.151 Available Spare Space: OK 00:11:39.151 Temperature: OK 00:11:39.151 Device Reliability: OK 00:11:39.151 Read Only: No 00:11:39.151 Volatile Memory Backup: OK 00:11:39.151 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:39.151 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:39.151 Available Spare: 0% 00:11:39.151 Available Spare Threshold: 0% 00:11:39.151 [2024-07-12 17:19:57.766495] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:39.151 [2024-07-12 17:19:57.774382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:39.151 [2024-07-12 17:19:57.774414] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:39.151 [2024-07-12 17:19:57.774423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.151 [2024-07-12 17:19:57.774428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.151 [2024-07-12 17:19:57.774434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.151 [2024-07-12 17:19:57.774440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.151 [2024-07-12 17:19:57.774488] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:39.151 [2024-07-12 17:19:57.774498] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:39.151 [2024-07-12 17:19:57.775491] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:39.151 [2024-07-12 17:19:57.775534] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:39.151 [2024-07-12 17:19:57.775540] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:39.151 [2024-07-12 17:19:57.776505] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:39.151 [2024-07-12 17:19:57.776517] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:39.151 [2024-07-12 17:19:57.776563] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:39.151 [2024-07-12 17:19:57.779383] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:39.151 
Life Percentage Used: 0% 00:11:39.151 Data Units Read: 0 00:11:39.151 Data Units Written: 0 00:11:39.151 Host Read Commands: 0 00:11:39.151 Host Write Commands: 0 00:11:39.151 Controller Busy Time: 0 minutes 00:11:39.151 Power Cycles: 0 00:11:39.151 Power On Hours: 0 hours 00:11:39.151 Unsafe Shutdowns: 0 00:11:39.151 Unrecoverable Media Errors: 0 00:11:39.151 Lifetime Error Log Entries: 0 00:11:39.151 Warning Temperature Time: 0 minutes 00:11:39.151 Critical Temperature Time: 0 minutes 00:11:39.151 00:11:39.151 Number of Queues 00:11:39.151 ================ 00:11:39.151 Number of I/O Submission Queues: 127 00:11:39.151 Number of I/O Completion Queues: 127 00:11:39.151 00:11:39.151 Active Namespaces 00:11:39.151 ================= 00:11:39.151 Namespace ID:1 00:11:39.151 Error Recovery Timeout: Unlimited 00:11:39.151 Command Set Identifier: NVM (00h) 00:11:39.151 Deallocate: Supported 00:11:39.151 Deallocated/Unwritten Error: Not Supported 00:11:39.151 Deallocated Read Value: Unknown 00:11:39.151 Deallocate in Write Zeroes: Not Supported 00:11:39.151 Deallocated Guard Field: 0xFFFF 00:11:39.151 Flush: Supported 00:11:39.151 Reservation: Supported 00:11:39.151 Namespace Sharing Capabilities: Multiple Controllers 00:11:39.151 Size (in LBAs): 131072 (0GiB) 00:11:39.151 Capacity (in LBAs): 131072 (0GiB) 00:11:39.151 Utilization (in LBAs): 131072 (0GiB) 00:11:39.151 NGUID: 2C04BA2F2F1A4BA0858698CB955F07B8 00:11:39.151 UUID: 2c04ba2f-2f1a-4ba0-8586-98cb955f07b8 00:11:39.151 Thin Provisioning: Not Supported 00:11:39.151 Per-NS Atomic Units: Yes 00:11:39.151 Atomic Boundary Size (Normal): 0 00:11:39.151 Atomic Boundary Size (PFail): 0 00:11:39.151 Atomic Boundary Offset: 0 00:11:39.151 Maximum Single Source Range Length: 65535 00:11:39.151 Maximum Copy Length: 65535 00:11:39.151 Maximum Source Range Count: 1 00:11:39.151 NGUID/EUI64 Never Reused: No 00:11:39.151 Namespace Write Protected: No 00:11:39.151 Number of LBA Formats: 1 00:11:39.151 Current LBA Format: LBA Format 
#00 00:11:39.151 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:39.151 00:11:39.151 17:19:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:39.151 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.410 [2024-07-12 17:19:57.995715] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:44.684 Initializing NVMe Controllers 00:11:44.684 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:44.684 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:44.684 Initialization complete. Launching workers. 00:11:44.684 ======================================================== 00:11:44.684 Latency(us) 00:11:44.684 Device Information : IOPS MiB/s Average min max 00:11:44.684 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39925.84 155.96 3205.75 956.94 8599.10 00:11:44.684 ======================================================== 00:11:44.684 Total : 39925.84 155.96 3205.75 956.94 8599.10 00:11:44.684 00:11:44.684 [2024-07-12 17:20:03.100642] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:44.684 17:20:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:44.684 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.684 [2024-07-12 17:20:03.319262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:49.957 
Initializing NVMe Controllers 00:11:49.957 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:49.957 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:49.957 Initialization complete. Launching workers. 00:11:49.957 ======================================================== 00:11:49.957 Latency(us) 00:11:49.957 Device Information : IOPS MiB/s Average min max 00:11:49.957 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39956.18 156.08 3203.76 983.15 6607.61 00:11:49.957 ======================================================== 00:11:49.957 Total : 39956.18 156.08 3203.76 983.15 6607.61 00:11:49.957 00:11:49.957 [2024-07-12 17:20:08.340322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:49.957 17:20:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:49.957 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.957 [2024-07-12 17:20:08.526937] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:55.231 [2024-07-12 17:20:13.666474] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:55.231 Initializing NVMe Controllers 00:11:55.231 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:55.231 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:55.232 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:55.232 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 
00:11:55.232 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:55.232 Initialization complete. Launching workers. 00:11:55.232 Starting thread on core 2 00:11:55.232 Starting thread on core 3 00:11:55.232 Starting thread on core 1 00:11:55.232 17:20:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:55.232 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.232 [2024-07-12 17:20:13.952872] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:58.520 [2024-07-12 17:20:17.022650] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:58.520 Initializing NVMe Controllers 00:11:58.520 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:58.520 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:58.520 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:58.520 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:58.520 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:58.520 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:58.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:58.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:58.520 Initialization complete. Launching workers. 
00:11:58.520 Starting thread on core 1 with urgent priority queue 00:11:58.520 Starting thread on core 2 with urgent priority queue 00:11:58.520 Starting thread on core 3 with urgent priority queue 00:11:58.520 Starting thread on core 0 with urgent priority queue 00:11:58.521 SPDK bdev Controller (SPDK2 ) core 0: 8246.00 IO/s 12.13 secs/100000 ios 00:11:58.521 SPDK bdev Controller (SPDK2 ) core 1: 6079.00 IO/s 16.45 secs/100000 ios 00:11:58.521 SPDK bdev Controller (SPDK2 ) core 2: 6867.33 IO/s 14.56 secs/100000 ios 00:11:58.521 SPDK bdev Controller (SPDK2 ) core 3: 8165.67 IO/s 12.25 secs/100000 ios 00:11:58.521 ======================================================== 00:11:58.521 00:11:58.521 17:20:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:58.521 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.521 [2024-07-12 17:20:17.294819] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:58.779 Initializing NVMe Controllers 00:11:58.779 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:58.779 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:58.779 Namespace ID: 1 size: 0GB 00:11:58.779 Initialization complete. 00:11:58.779 INFO: using host memory buffer for IO 00:11:58.779 Hello world! 
00:11:58.779 [2024-07-12 17:20:17.304891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:58.779 17:20:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:58.779 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.037 [2024-07-12 17:20:17.564417] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:59.973 Initializing NVMe Controllers 00:11:59.973 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:59.973 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:59.973 Initialization complete. Launching workers. 00:11:59.973 submit (in ns) avg, min, max = 6663.2, 3225.2, 3999169.6 00:11:59.973 complete (in ns) avg, min, max = 21066.8, 1807.0, 6988936.5 00:11:59.973 00:11:59.973 Submit histogram 00:11:59.973 ================ 00:11:59.973 Range in us Cumulative Count 00:11:59.973 3.214 - 3.228: 0.0061% ( 1) 00:11:59.973 3.242 - 3.256: 0.0488% ( 7) 00:11:59.973 3.256 - 3.270: 0.0793% ( 5) 00:11:59.973 3.270 - 3.283: 0.1220% ( 7) 00:11:59.973 3.283 - 3.297: 0.2013% ( 13) 00:11:59.973 3.297 - 3.311: 0.5611% ( 59) 00:11:59.973 3.311 - 3.325: 1.5918% ( 169) 00:11:59.973 3.325 - 3.339: 3.0615% ( 241) 00:11:59.973 3.339 - 3.353: 6.0987% ( 498) 00:11:59.973 3.353 - 3.367: 10.5446% ( 729) 00:11:59.973 3.367 - 3.381: 15.9968% ( 894) 00:11:59.973 3.381 - 3.395: 21.9247% ( 972) 00:11:59.973 3.395 - 3.409: 28.1454% ( 1020) 00:11:59.973 3.409 - 3.423: 33.6037% ( 895) 00:11:59.973 3.423 - 3.437: 38.3302% ( 775) 00:11:59.973 3.437 - 3.450: 43.8617% ( 907) 00:11:59.973 3.450 - 3.464: 49.3078% ( 893) 00:11:59.973 3.464 - 3.478: 53.9428% ( 760) 00:11:59.973 3.478 - 3.492: 58.2972% ( 714) 00:11:59.973 3.492 - 3.506: 64.1337% ( 957) 00:11:59.973 
3.506 - 3.520: 69.7506% ( 921) 00:11:59.973 3.520 - 3.534: 73.8428% ( 671) 00:11:59.973 3.534 - 3.548: 78.3436% ( 738) 00:11:59.973 3.548 - 3.562: 82.3077% ( 650) 00:11:59.973 3.562 - 3.590: 86.5707% ( 699) 00:11:59.973 3.590 - 3.617: 88.0649% ( 245) 00:11:59.973 3.617 - 3.645: 88.9797% ( 150) 00:11:59.973 3.645 - 3.673: 90.4312% ( 238) 00:11:59.973 3.673 - 3.701: 92.1144% ( 276) 00:11:59.973 3.701 - 3.729: 93.7367% ( 266) 00:11:59.973 3.729 - 3.757: 95.4565% ( 282) 00:11:59.973 3.757 - 3.784: 96.7860% ( 218) 00:11:59.973 3.784 - 3.812: 97.9813% ( 196) 00:11:59.973 3.812 - 3.840: 98.7376% ( 124) 00:11:59.973 3.840 - 3.868: 99.2194% ( 79) 00:11:59.973 3.868 - 3.896: 99.4816% ( 43) 00:11:59.973 3.896 - 3.923: 99.5914% ( 18) 00:11:59.973 3.923 - 3.951: 99.6402% ( 8) 00:11:59.973 3.951 - 3.979: 99.6646% ( 4) 00:11:59.973 5.287 - 5.315: 99.6707% ( 1) 00:11:59.973 5.370 - 5.398: 99.6768% ( 1) 00:11:59.973 5.510 - 5.537: 99.6829% ( 1) 00:11:59.973 5.537 - 5.565: 99.6890% ( 1) 00:11:59.973 5.677 - 5.704: 99.6951% ( 1) 00:11:59.973 5.760 - 5.788: 99.7012% ( 1) 00:11:59.973 5.927 - 5.955: 99.7073% ( 1) 00:11:59.973 6.010 - 6.038: 99.7256% ( 3) 00:11:59.973 6.094 - 6.122: 99.7317% ( 1) 00:11:59.973 6.177 - 6.205: 99.7378% ( 1) 00:11:59.973 6.205 - 6.233: 99.7439% ( 1) 00:11:59.973 6.261 - 6.289: 99.7500% ( 1) 00:11:59.973 6.317 - 6.344: 99.7561% ( 1) 00:11:59.973 6.344 - 6.372: 99.7622% ( 1) 00:11:59.973 6.400 - 6.428: 99.7743% ( 2) 00:11:59.973 6.428 - 6.456: 99.7804% ( 1) 00:11:59.973 6.456 - 6.483: 99.7865% ( 1) 00:11:59.973 6.539 - 6.567: 99.7926% ( 1) 00:11:59.973 6.650 - 6.678: 99.7987% ( 1) 00:11:59.973 6.678 - 6.706: 99.8048% ( 1) 00:11:59.973 6.790 - 6.817: 99.8109% ( 1) 00:11:59.973 6.817 - 6.845: 99.8170% ( 1) 00:11:59.973 6.957 - 6.984: 99.8353% ( 3) 00:11:59.973 7.012 - 7.040: 99.8414% ( 1) 00:11:59.973 7.040 - 7.068: 99.8475% ( 1) 00:11:59.973 7.123 - 7.179: 99.8536% ( 1) 00:11:59.973 7.346 - 7.402: 99.8597% ( 1) 00:11:59.973 7.402 - 7.457: 99.8658% ( 1) 
00:11:59.973 7.457 - 7.513: 99.8719% ( 1) 00:11:59.973 7.736 - 7.791: 99.8780% ( 1) 00:11:59.973 7.847 - 7.903: 99.8841% ( 1) 00:11:59.973 8.181 - 8.237: 99.8902% ( 1) 00:11:59.973 10.741 - 10.797: 99.8963% ( 1) 00:11:59.973 13.690 - 13.746: 99.9024% ( 1) 00:11:59.973 13.969 - 14.024: 99.9085% ( 1) 00:11:59.973 19.144 - 19.256: 99.9146% ( 1) 00:11:59.973 19.256 - 19.367: 99.9207% ( 1) 00:11:59.973 3989.148 - 4017.642: 100.0000% ( 13) 00:11:59.973 00:11:59.973 Complete histogram 00:11:59.973 ================== 00:11:59.973 Range in us Cumulative Count 00:11:59.973 1.795 - 1.809: 0.0061% ( 1) 00:11:59.973 [2024-07-12 17:20:18.659411] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:59.973 1.809 - 1.823: 0.0183% ( 2) 00:11:59.973 1.823 - 1.837: 0.0427% ( 4) 00:11:59.973 1.837 - 1.850: 0.2378% ( 32) 00:11:59.973 1.850 - 1.864: 1.0734% ( 137) 00:11:59.973 1.864 - 1.878: 2.1833% ( 182) 00:11:59.973 1.878 - 1.892: 3.6409% ( 239) 00:11:59.973 1.892 - 1.906: 19.4060% ( 2585) 00:11:59.973 1.906 - 1.920: 75.8919% ( 9262) 00:11:59.973 1.920 - 1.934: 92.4620% ( 2717) 00:11:59.973 1.934 - 1.948: 95.3833% ( 479) 00:11:59.973 1.948 - 1.962: 96.3408% ( 157) 00:11:59.973 1.962 - 1.976: 96.9263% ( 96) 00:11:59.973 1.976 - 1.990: 98.0301% ( 181) 00:11:59.973 1.990 - 2.003: 98.8352% ( 132) 00:11:59.973 2.003 - 2.017: 99.1096% ( 45) 00:11:59.973 2.017 - 2.031: 99.2072% ( 16) 00:11:59.973 2.031 - 2.045: 99.2682% ( 10) 00:11:59.973 2.045 - 2.059: 99.2987% ( 5) 00:11:59.973 2.059 - 2.073: 99.3230% ( 4) 00:11:59.973 2.073 - 2.087: 99.3413% ( 3) 00:11:59.973 2.087 - 2.101: 99.3657% ( 4) 00:11:59.973 2.101 - 2.115: 99.3840% ( 3) 00:11:59.973 2.184 - 2.198: 99.3901% ( 1) 00:11:59.973 2.282 - 2.296: 99.3962% ( 1) 00:11:59.973 2.337 - 2.351: 99.4023% ( 1) 00:11:59.973 2.393 - 2.407: 99.4084% ( 1) 00:11:59.973 3.548 - 3.562: 99.4145% ( 1) 00:11:59.973 4.257 - 4.285: 99.4206% ( 1) 00:11:59.973 4.285 - 4.313: 99.4267% ( 1) 00:11:59.973 
4.341 - 4.369: 99.4328% ( 1) 00:11:59.973 4.536 - 4.563: 99.4389% ( 1) 00:11:59.973 4.563 - 4.591: 99.4450% ( 1) 00:11:59.973 4.758 - 4.786: 99.4511% ( 1) 00:11:59.973 4.814 - 4.842: 99.4572% ( 1) 00:11:59.973 4.870 - 4.897: 99.4633% ( 1) 00:11:59.973 4.953 - 4.981: 99.4755% ( 2) 00:11:59.973 5.064 - 5.092: 99.4816% ( 1) 00:11:59.973 5.287 - 5.315: 99.4877% ( 1) 00:11:59.973 5.426 - 5.454: 99.4938% ( 1) 00:11:59.973 5.482 - 5.510: 99.4999% ( 1) 00:11:59.973 6.038 - 6.066: 99.5060% ( 1) 00:11:59.973 6.066 - 6.094: 99.5121% ( 1) 00:11:59.973 6.790 - 6.817: 99.5182% ( 1) 00:11:59.973 7.680 - 7.736: 99.5243% ( 1) 00:11:59.973 3405.023 - 3419.270: 99.5304% ( 1) 00:11:59.973 3989.148 - 4017.642: 99.9939% ( 76) 00:11:59.973 6981.009 - 7009.503: 100.0000% ( 1) 00:11:59.973 00:11:59.973 17:20:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:59.973 17:20:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:59.973 17:20:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:59.973 17:20:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:59.973 17:20:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:00.232 [ 00:12:00.232 { 00:12:00.232 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:00.232 "subtype": "Discovery", 00:12:00.232 "listen_addresses": [], 00:12:00.232 "allow_any_host": true, 00:12:00.232 "hosts": [] 00:12:00.232 }, 00:12:00.232 { 00:12:00.232 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:00.232 "subtype": "NVMe", 00:12:00.232 "listen_addresses": [ 00:12:00.232 { 00:12:00.232 "trtype": "VFIOUSER", 00:12:00.232 "adrfam": "IPv4", 00:12:00.232 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 
00:12:00.232 "trsvcid": "0" 00:12:00.232 } 00:12:00.232 ], 00:12:00.232 "allow_any_host": true, 00:12:00.232 "hosts": [], 00:12:00.232 "serial_number": "SPDK1", 00:12:00.232 "model_number": "SPDK bdev Controller", 00:12:00.232 "max_namespaces": 32, 00:12:00.232 "min_cntlid": 1, 00:12:00.232 "max_cntlid": 65519, 00:12:00.232 "namespaces": [ 00:12:00.232 { 00:12:00.232 "nsid": 1, 00:12:00.232 "bdev_name": "Malloc1", 00:12:00.232 "name": "Malloc1", 00:12:00.232 "nguid": "F0768B4BFB4C4DBD9649FFDE18163FD5", 00:12:00.232 "uuid": "f0768b4b-fb4c-4dbd-9649-ffde18163fd5" 00:12:00.232 }, 00:12:00.232 { 00:12:00.232 "nsid": 2, 00:12:00.232 "bdev_name": "Malloc3", 00:12:00.232 "name": "Malloc3", 00:12:00.232 "nguid": "A5B08E1F1CC945D2A1B254BF852055C7", 00:12:00.232 "uuid": "a5b08e1f-1cc9-45d2-a1b2-54bf852055c7" 00:12:00.232 } 00:12:00.232 ] 00:12:00.232 }, 00:12:00.232 { 00:12:00.232 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:00.232 "subtype": "NVMe", 00:12:00.232 "listen_addresses": [ 00:12:00.232 { 00:12:00.232 "trtype": "VFIOUSER", 00:12:00.232 "adrfam": "IPv4", 00:12:00.232 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:00.232 "trsvcid": "0" 00:12:00.232 } 00:12:00.232 ], 00:12:00.232 "allow_any_host": true, 00:12:00.232 "hosts": [], 00:12:00.232 "serial_number": "SPDK2", 00:12:00.232 "model_number": "SPDK bdev Controller", 00:12:00.232 "max_namespaces": 32, 00:12:00.232 "min_cntlid": 1, 00:12:00.232 "max_cntlid": 65519, 00:12:00.232 "namespaces": [ 00:12:00.232 { 00:12:00.232 "nsid": 1, 00:12:00.232 "bdev_name": "Malloc2", 00:12:00.232 "name": "Malloc2", 00:12:00.232 "nguid": "2C04BA2F2F1A4BA0858698CB955F07B8", 00:12:00.232 "uuid": "2c04ba2f-2f1a-4ba0-8586-98cb955f07b8" 00:12:00.232 } 00:12:00.232 ] 00:12:00.232 } 00:12:00.232 ] 00:12:00.232 17:20:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:00.232 17:20:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:00.232 17:20:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3996295 00:12:00.232 17:20:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:00.232 17:20:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:00.232 17:20:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:00.232 17:20:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:00.232 17:20:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:00.232 17:20:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:00.232 17:20:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:00.232 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.490 [2024-07-12 17:20:19.017128] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:00.490 Malloc4 00:12:00.490 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:00.490 [2024-07-12 17:20:19.250899] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:00.749 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:00.749 Asynchronous Event Request test 00:12:00.749 Attaching to /var/run/vfio-user/domain/vfio-user2/2 
00:12:00.749 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:00.749 Registering asynchronous event callbacks... 00:12:00.749 Starting namespace attribute notice tests for all controllers... 00:12:00.749 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:00.749 aer_cb - Changed Namespace 00:12:00.749 Cleaning up... 00:12:00.749 [ 00:12:00.749 { 00:12:00.749 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:00.749 "subtype": "Discovery", 00:12:00.749 "listen_addresses": [], 00:12:00.749 "allow_any_host": true, 00:12:00.749 "hosts": [] 00:12:00.749 }, 00:12:00.749 { 00:12:00.749 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:00.749 "subtype": "NVMe", 00:12:00.749 "listen_addresses": [ 00:12:00.749 { 00:12:00.749 "trtype": "VFIOUSER", 00:12:00.749 "adrfam": "IPv4", 00:12:00.749 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:00.749 "trsvcid": "0" 00:12:00.749 } 00:12:00.749 ], 00:12:00.749 "allow_any_host": true, 00:12:00.749 "hosts": [], 00:12:00.749 "serial_number": "SPDK1", 00:12:00.749 "model_number": "SPDK bdev Controller", 00:12:00.749 "max_namespaces": 32, 00:12:00.749 "min_cntlid": 1, 00:12:00.749 "max_cntlid": 65519, 00:12:00.749 "namespaces": [ 00:12:00.749 { 00:12:00.749 "nsid": 1, 00:12:00.749 "bdev_name": "Malloc1", 00:12:00.749 "name": "Malloc1", 00:12:00.749 "nguid": "F0768B4BFB4C4DBD9649FFDE18163FD5", 00:12:00.749 "uuid": "f0768b4b-fb4c-4dbd-9649-ffde18163fd5" 00:12:00.749 }, 00:12:00.749 { 00:12:00.749 "nsid": 2, 00:12:00.749 "bdev_name": "Malloc3", 00:12:00.749 "name": "Malloc3", 00:12:00.749 "nguid": "A5B08E1F1CC945D2A1B254BF852055C7", 00:12:00.749 "uuid": "a5b08e1f-1cc9-45d2-a1b2-54bf852055c7" 00:12:00.749 } 00:12:00.749 ] 00:12:00.749 }, 00:12:00.749 { 00:12:00.749 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:00.749 "subtype": "NVMe", 00:12:00.749 "listen_addresses": [ 00:12:00.749 { 00:12:00.749 "trtype": "VFIOUSER", 00:12:00.749 "adrfam": "IPv4", 00:12:00.749 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:12:00.749 "trsvcid": "0" 00:12:00.749 } 00:12:00.749 ], 00:12:00.749 "allow_any_host": true, 00:12:00.749 "hosts": [], 00:12:00.749 "serial_number": "SPDK2", 00:12:00.749 "model_number": "SPDK bdev Controller", 00:12:00.749 "max_namespaces": 32, 00:12:00.749 "min_cntlid": 1, 00:12:00.749 "max_cntlid": 65519, 00:12:00.749 "namespaces": [ 00:12:00.749 { 00:12:00.749 "nsid": 1, 00:12:00.749 "bdev_name": "Malloc2", 00:12:00.749 "name": "Malloc2", 00:12:00.749 "nguid": "2C04BA2F2F1A4BA0858698CB955F07B8", 00:12:00.749 "uuid": "2c04ba2f-2f1a-4ba0-8586-98cb955f07b8" 00:12:00.749 }, 00:12:00.749 { 00:12:00.749 "nsid": 2, 00:12:00.749 "bdev_name": "Malloc4", 00:12:00.750 "name": "Malloc4", 00:12:00.750 "nguid": "FF4943AECA98479A87ABA56CDB44EDFA", 00:12:00.750 "uuid": "ff4943ae-ca98-479a-87ab-a56cdb44edfa" 00:12:00.750 } 00:12:00.750 ] 00:12:00.750 } 00:12:00.750 ] 00:12:00.750 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3996295 00:12:00.750 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:00.750 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3988447 00:12:00.750 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3988447 ']' 00:12:00.750 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3988447 00:12:00.750 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:00.750 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:00.750 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3988447 00:12:00.750 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:00.750 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:00.750 17:20:19 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3988447' 00:12:00.750 killing process with pid 3988447 00:12:00.750 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3988447 00:12:00.750 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3988447 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3996527 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3996527' 00:12:01.009 Process pid: 3996527 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3996527 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3996527 ']' 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.009 17:20:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:01.268 [2024-07-12 17:20:19.822388] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:01.268 [2024-07-12 17:20:19.823234] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:12:01.268 [2024-07-12 17:20:19.823273] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.268 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.268 [2024-07-12 17:20:19.878349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.268 [2024-07-12 17:20:19.945448] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.268 [2024-07-12 17:20:19.945489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.268 [2024-07-12 17:20:19.945496] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.268 [2024-07-12 17:20:19.945502] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.268 [2024-07-12 17:20:19.945507] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:01.268 [2024-07-12 17:20:19.945604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.268 [2024-07-12 17:20:19.945703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.268 [2024-07-12 17:20:19.945801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.268 [2024-07-12 17:20:19.945802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.268 [2024-07-12 17:20:20.025825] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:01.268 [2024-07-12 17:20:20.025866] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:01.268 [2024-07-12 17:20:20.026025] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:01.268 [2024-07-12 17:20:20.026369] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:01.268 [2024-07-12 17:20:20.026624] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:12:02.202 17:20:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:02.202 17:20:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:02.202 17:20:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:03.203 17:20:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:03.203 17:20:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:03.203 17:20:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:03.203 17:20:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:03.203 17:20:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:03.203 17:20:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:03.461 Malloc1 00:12:03.461 17:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:03.461 17:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:03.719 17:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:03.978 17:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:03.978 17:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:12:03.978 17:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:03.978 Malloc2 00:12:04.238 17:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:04.238 17:20:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:04.497 17:20:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:04.755 17:20:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:04.755 17:20:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3996527 00:12:04.755 17:20:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3996527 ']' 00:12:04.755 17:20:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3996527 00:12:04.755 17:20:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:04.755 17:20:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:04.755 17:20:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3996527 00:12:04.755 17:20:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:04.755 17:20:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:04.755 17:20:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3996527' 00:12:04.755 killing 
process with pid 3996527 00:12:04.755 17:20:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3996527 00:12:04.755 17:20:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3996527 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:05.014 00:12:05.014 real 0m52.041s 00:12:05.014 user 3m25.942s 00:12:05.014 sys 0m3.675s 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:05.014 ************************************ 00:12:05.014 END TEST nvmf_vfio_user 00:12:05.014 ************************************ 00:12:05.014 17:20:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:05.014 17:20:23 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:05.014 17:20:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:05.014 17:20:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.014 17:20:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:05.014 ************************************ 00:12:05.014 START TEST nvmf_vfio_user_nvme_compliance 00:12:05.014 ************************************ 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:05.014 * Looking for test storage... 
00:12:05.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.014 17:20:23 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:05.014 17:20:23 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:05.014 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3997292 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3997292' 00:12:05.015 Process pid: 3997292 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # 
waitforlisten 3997292 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3997292 ']' 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.015 17:20:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:05.274 [2024-07-12 17:20:23.806426] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:12:05.274 [2024-07-12 17:20:23.806473] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.274 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.274 [2024-07-12 17:20:23.861188] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:05.274 [2024-07-12 17:20:23.940473] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.274 [2024-07-12 17:20:23.940509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:05.274 [2024-07-12 17:20:23.940516] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.274 [2024-07-12 17:20:23.940522] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.274 [2024-07-12 17:20:23.940527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.274 [2024-07-12 17:20:23.940571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.274 [2024-07-12 17:20:23.940668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.274 [2024-07-12 17:20:23.940671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.842 17:20:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.842 17:20:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:05.842 17:20:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:07.220 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:07.221 malloc0 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.221 17:20:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:07.221 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.221 00:12:07.221 00:12:07.221 CUnit - A unit testing framework for C - Version 2.1-3 00:12:07.221 http://cunit.sourceforge.net/ 00:12:07.221 00:12:07.221 00:12:07.221 Suite: nvme_compliance 00:12:07.221 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 17:20:25.831834] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:07.221 [2024-07-12 17:20:25.833191] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:07.221 [2024-07-12 17:20:25.833207] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:07.221 [2024-07-12 17:20:25.833213] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:07.221 [2024-07-12 17:20:25.834856] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:07.221 passed 00:12:07.221 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 17:20:25.912391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:07.221 [2024-07-12 17:20:25.915414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:07.221 passed 00:12:07.221 Test: admin_identify_ns ...[2024-07-12 17:20:25.994730] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:07.481 [2024-07-12 17:20:26.058389] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:07.481 [2024-07-12 17:20:26.066387] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:07.481 [2024-07-12 17:20:26.087482] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:12:07.481 passed 00:12:07.481 Test: admin_get_features_mandatory_features ...[2024-07-12 17:20:26.161684] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:07.481 [2024-07-12 17:20:26.164704] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:07.481 passed 00:12:07.481 Test: admin_get_features_optional_features ...[2024-07-12 17:20:26.243208] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:07.481 [2024-07-12 17:20:26.246232] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:07.740 passed 00:12:07.740 Test: admin_set_features_number_of_queues ...[2024-07-12 17:20:26.324051] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:07.740 [2024-07-12 17:20:26.429469] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:07.740 passed 00:12:07.740 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 17:20:26.504643] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:07.740 [2024-07-12 17:20:26.507666] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:07.999 passed 00:12:07.999 Test: admin_get_log_page_with_lpo ...[2024-07-12 17:20:26.585753] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:07.999 [2024-07-12 17:20:26.654393] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:08.000 [2024-07-12 17:20:26.667431] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:08.000 passed 00:12:08.000 Test: fabric_property_get ...[2024-07-12 17:20:26.741366] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:08.000 [2024-07-12 17:20:26.742598] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:12:08.000 [2024-07-12 17:20:26.747402] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:08.000 passed 00:12:08.259 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 17:20:26.822907] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:08.259 [2024-07-12 17:20:26.824128] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:08.259 [2024-07-12 17:20:26.825923] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:08.259 passed 00:12:08.259 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 17:20:26.903697] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:08.259 [2024-07-12 17:20:26.987385] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:08.259 [2024-07-12 17:20:27.003388] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:08.259 [2024-07-12 17:20:27.008474] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:08.259 passed 00:12:08.518 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 17:20:27.084593] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:08.518 [2024-07-12 17:20:27.085825] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:08.518 [2024-07-12 17:20:27.087615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:08.518 passed 00:12:08.518 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 17:20:27.165399] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:08.518 [2024-07-12 17:20:27.242386] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:08.518 [2024-07-12 17:20:27.266391] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:08.518 [2024-07-12 17:20:27.271471] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:08.777 passed 00:12:08.777 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 17:20:27.347561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:08.777 [2024-07-12 17:20:27.348789] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:08.777 [2024-07-12 17:20:27.348811] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:08.777 [2024-07-12 17:20:27.350581] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:08.777 passed 00:12:08.777 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 17:20:27.428333] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:08.777 [2024-07-12 17:20:27.519386] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:08.777 [2024-07-12 17:20:27.527383] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:08.777 [2024-07-12 17:20:27.535389] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:08.777 [2024-07-12 17:20:27.543383] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:09.035 [2024-07-12 17:20:27.572465] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:09.035 passed 00:12:09.035 Test: admin_create_io_sq_verify_pc ...[2024-07-12 17:20:27.650589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:09.035 [2024-07-12 17:20:27.669391] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:09.035 [2024-07-12 17:20:27.686761] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:12:09.035 passed 00:12:09.035 Test: admin_create_io_qp_max_qps ...[2024-07-12 17:20:27.762258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:10.466 [2024-07-12 17:20:28.852388] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:10.724 [2024-07-12 17:20:29.257292] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:10.724 passed 00:12:10.724 Test: admin_create_io_sq_shared_cq ...[2024-07-12 17:20:29.332277] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:10.724 [2024-07-12 17:20:29.465387] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:10.724 [2024-07-12 17:20:29.502450] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:10.982 passed 00:12:10.982 00:12:10.982 Run Summary: Type Total Ran Passed Failed Inactive 00:12:10.982 suites 1 1 n/a 0 0 00:12:10.982 tests 18 18 18 0 0 00:12:10.982 asserts 360 360 360 0 n/a 00:12:10.982 00:12:10.982 Elapsed time = 1.510 seconds 00:12:10.982 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3997292 00:12:10.982 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3997292 ']' 00:12:10.982 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3997292 00:12:10.982 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:10.982 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:10.982 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3997292 00:12:10.982 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:10.982 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:10.982 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3997292' 00:12:10.982 killing process with pid 3997292 00:12:10.982 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3997292 00:12:10.982 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3997292 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:11.240 00:12:11.240 real 0m6.143s 00:12:11.240 user 0m17.552s 00:12:11.240 sys 0m0.467s 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:11.240 ************************************ 00:12:11.240 END TEST nvmf_vfio_user_nvme_compliance 00:12:11.240 ************************************ 00:12:11.240 17:20:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:11.240 17:20:29 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:11.240 17:20:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:11.240 17:20:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.240 17:20:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:11.240 ************************************ 00:12:11.240 START TEST nvmf_vfio_user_fuzz 00:12:11.240 ************************************ 00:12:11.240 17:20:29 
nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:11.240 * Looking for test storage... 00:12:11.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.240 
17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:11.240 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:11.241 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:11.241 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3998277 00:12:11.241 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3998277' 00:12:11.241 Process pid: 3998277 00:12:11.241 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:11.241 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3998277 00:12:11.241 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3998277 ']' 00:12:11.241 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.241 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:11.241 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:12:11.241 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.241 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.241 17:20:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:12.177 17:20:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.177 17:20:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:12.177 17:20:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:13.113 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:13.113 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.113 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:13.113 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.113 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:13.113 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:13.113 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:13.114 malloc0 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:13.114 17:20:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:45.192 Fuzzing completed. 
Shutting down the fuzz application 00:12:45.192 00:12:45.192 Dumping successful admin opcodes: 00:12:45.192 8, 9, 10, 24, 00:12:45.192 Dumping successful io opcodes: 00:12:45.192 0, 00:12:45.192 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1129423, total successful commands: 4445, random_seed: 692613120 00:12:45.192 NS: 0x200003a1ef00 admin qp, Total commands completed: 280616, total successful commands: 2261, random_seed: 228458688 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3998277 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3998277 ']' 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3998277 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3998277 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3998277' 00:12:45.192 killing process with pid 3998277 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@967 -- # kill 3998277 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3998277 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:45.192 00:12:45.192 real 0m32.726s 00:12:45.192 user 0m35.521s 00:12:45.192 sys 0m25.811s 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:45.192 17:21:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:45.192 ************************************ 00:12:45.192 END TEST nvmf_vfio_user_fuzz 00:12:45.192 ************************************ 00:12:45.192 17:21:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:45.192 17:21:02 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:45.192 17:21:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:45.192 17:21:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.192 17:21:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:45.192 ************************************ 00:12:45.192 START TEST nvmf_host_management 00:12:45.192 ************************************ 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:45.192 * Looking for test storage... 
00:12:45.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.192 
17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.192 17:21:02 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:45.193 17:21:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:49.388 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.388 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.388 
17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:49.388 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:49.389 Found net devices under 0000:86:00.0: cvl_0_0 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:49.389 Found net devices under 0000:86:00.1: cvl_0_1 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:49.389 17:21:07 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:49.389 17:21:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:49.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:49.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:12:49.389 00:12:49.389 --- 10.0.0.2 ping statistics --- 00:12:49.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.389 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:49.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:12:49.389 00:12:49.389 --- 10.0.0.1 ping statistics --- 00:12:49.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.389 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:49.389 17:21:08 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=4006799 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 4006799 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4006799 ']' 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:49.389 17:21:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:49.649 [2024-07-12 17:21:08.186075] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:12:49.650 [2024-07-12 17:21:08.186118] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.650 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.650 [2024-07-12 17:21:08.243787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.650 [2024-07-12 17:21:08.325893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.650 [2024-07-12 17:21:08.325928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.650 [2024-07-12 17:21:08.325935] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.650 [2024-07-12 17:21:08.325942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.650 [2024-07-12 17:21:08.325947] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:49.650 [2024-07-12 17:21:08.326041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.650 [2024-07-12 17:21:08.326127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.650 [2024-07-12 17:21:08.326235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.650 [2024-07-12 17:21:08.326236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:50.260 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:50.260 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:50.260 17:21:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:50.260 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:50.260 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:50.519 17:21:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.519 17:21:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:50.519 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.519 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:50.519 [2024-07-12 17:21:09.048518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.519 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.519 17:21:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:50.520 17:21:09 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:50.520 Malloc0 00:12:50.520 [2024-07-12 17:21:09.108240] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4007149 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4007149 /var/tmp/bdevperf.sock 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4007149 ']' 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:50.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:50.520 { 00:12:50.520 "params": { 00:12:50.520 "name": "Nvme$subsystem", 00:12:50.520 "trtype": "$TEST_TRANSPORT", 00:12:50.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:50.520 "adrfam": "ipv4", 00:12:50.520 "trsvcid": "$NVMF_PORT", 00:12:50.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:50.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:50.520 "hdgst": ${hdgst:-false}, 00:12:50.520 "ddgst": ${ddgst:-false} 00:12:50.520 }, 00:12:50.520 "method": "bdev_nvme_attach_controller" 00:12:50.520 } 00:12:50.520 EOF 00:12:50.520 )") 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:50.520 17:21:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:50.520 "params": { 00:12:50.520 "name": "Nvme0", 00:12:50.520 "trtype": "tcp", 00:12:50.520 "traddr": "10.0.0.2", 00:12:50.520 "adrfam": "ipv4", 00:12:50.520 "trsvcid": "4420", 00:12:50.520 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:50.520 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:50.520 "hdgst": false, 00:12:50.520 "ddgst": false 00:12:50.520 }, 00:12:50.520 "method": "bdev_nvme_attach_controller" 00:12:50.520 }' 00:12:50.520 [2024-07-12 17:21:09.203746] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:12:50.520 [2024-07-12 17:21:09.203793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4007149 ] 00:12:50.520 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.520 [2024-07-12 17:21:09.258177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.779 [2024-07-12 17:21:09.331501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.038 Running I/O for 10 seconds... 
00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.299 
17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:51.299 17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.561 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=723 00:12:51.561 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 723 -ge 100 ']' 00:12:51.561 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:51.561 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:51.561 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:51.561 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:51.561 17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.561 17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:51.561 [2024-07-12 17:21:10.099639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.099784] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1725460 is same with 
the state(5) to be set 00:12:51.561 17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.561 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:51.561 17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.561 17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:51.561 [2024-07-12 17:21:10.105888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.561 [2024-07-12 17:21:10.105921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.561 [2024-07-12 17:21:10.105930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.561 [2024-07-12 17:21:10.105938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.561 [2024-07-12 17:21:10.105945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.561 [2024-07-12 17:21:10.105952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.561 [2024-07-12 17:21:10.105960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.561 [2024-07-12 17:21:10.105967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.561 [2024-07-12 17:21:10.105974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x13ad980 is same with the state(5) to be set 00:12:51.561 [2024-07-12 17:21:10.106206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:51.561 [2024-07-12 17:21:10.106218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.561 [2024-07-12 17:21:10.106232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:51.561 [2024-07-12 17:21:10.106240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.561 [2024-07-12 17:21:10.106249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:51.561 [2024-07-12 17:21:10.106257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.561 [2024-07-12 17:21:10.106265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:51.561 [2024-07-12 17:21:10.106273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.561 [2024-07-12 17:21:10.106282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:51.561 [2024-07-12 17:21:10.106289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.561 [2024-07-12 17:21:10.106298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:12:51.561 [2024-07-12 17:21:10.106305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.561 [2024-07-12 17:21:10.106313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:51.561 [2024-07-12 17:21:10.106320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) notice pairs repeated for cid 7 through cid 61, lba 107392 through 114304, len:128 each, elided ...]
00:12:51.563 [2024-07-12 17:21:10.107180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.563 [2024-07-12 17:21:10.107188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:51.563 [2024-07-12 17:21:10.107196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.563 [2024-07-12 17:21:10.107204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:51.563 [2024-07-12 17:21:10.107211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.563 [2024-07-12 17:21:10.107270] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17beb20 was disconnected and freed. reset controller. 
00:12:51.563 [2024-07-12 17:21:10.108161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:51.563 task offset: 106496 on job bdev=Nvme0n1 fails 00:12:51.563 00:12:51.563 Latency(us) 00:12:51.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.563 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:51.563 Job: Nvme0n1 ended in about 0.45 seconds with error 00:12:51.563 Verification LBA range: start 0x0 length 0x400 00:12:51.563 Nvme0n1 : 0.45 1832.98 114.56 141.00 0.00 31615.37 1645.52 27696.08 00:12:51.563 =================================================================================================================== 00:12:51.563 Total : 1832.98 114.56 141.00 0.00 31615.37 1645.52 27696.08 00:12:51.563 [2024-07-12 17:21:10.109814] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:51.563 [2024-07-12 17:21:10.109829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ad980 (9): Bad file descriptor 00:12:51.563 [2024-07-12 17:21:10.111173] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:12:51.563 [2024-07-12 17:21:10.111251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:12:51.563 [2024-07-12 17:21:10.111275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.563 [2024-07-12 17:21:10.111291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:12:51.563 [2024-07-12 17:21:10.111299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:12:51.563 [2024-07-12 
17:21:10.111305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:12:51.563 [2024-07-12 17:21:10.111312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13ad980 00:12:51.563 [2024-07-12 17:21:10.111330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ad980 (9): Bad file descriptor 00:12:51.563 [2024-07-12 17:21:10.111341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:12:51.563 [2024-07-12 17:21:10.111347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:12:51.563 [2024-07-12 17:21:10.111355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:12:51.563 [2024-07-12 17:21:10.111366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:12:51.563 17:21:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.563 17:21:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:52.501 17:21:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4007149 00:12:52.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4007149) - No such process 00:12:52.501 17:21:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:52.501 17:21:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:52.501 17:21:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:52.501 17:21:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:52.501 17:21:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:52.501 17:21:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:52.501 17:21:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:52.501 17:21:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:52.501 { 00:12:52.501 "params": { 00:12:52.501 "name": "Nvme$subsystem", 00:12:52.501 "trtype": "$TEST_TRANSPORT", 00:12:52.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:52.501 "adrfam": "ipv4", 00:12:52.501 "trsvcid": "$NVMF_PORT", 00:12:52.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:52.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:52.501 "hdgst": ${hdgst:-false}, 00:12:52.501 "ddgst": ${ddgst:-false} 00:12:52.501 }, 00:12:52.501 "method": "bdev_nvme_attach_controller" 00:12:52.501 } 
00:12:52.501 EOF 00:12:52.501 )") 00:12:52.501 17:21:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:52.501 17:21:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:52.501 17:21:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:52.501 17:21:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:52.501 "params": { 00:12:52.501 "name": "Nvme0", 00:12:52.501 "trtype": "tcp", 00:12:52.501 "traddr": "10.0.0.2", 00:12:52.501 "adrfam": "ipv4", 00:12:52.501 "trsvcid": "4420", 00:12:52.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:52.501 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:52.501 "hdgst": false, 00:12:52.501 "ddgst": false 00:12:52.501 }, 00:12:52.501 "method": "bdev_nvme_attach_controller" 00:12:52.501 }' 00:12:52.501 [2024-07-12 17:21:11.168111] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:12:52.501 [2024-07-12 17:21:11.168159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4007819 ] 00:12:52.501 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.501 [2024-07-12 17:21:11.220814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.760 [2024-07-12 17:21:11.293175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.019 Running I/O for 1 seconds... 
00:12:53.957 00:12:53.957 Latency(us) 00:12:53.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.957 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:53.957 Verification LBA range: start 0x0 length 0x400 00:12:53.957 Nvme0n1 : 1.02 1937.66 121.10 0.00 0.00 32517.58 7094.98 27696.08 00:12:53.957 =================================================================================================================== 00:12:53.957 Total : 1937.66 121.10 0.00 0.00 32517.58 7094.98 27696.08 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:54.216 rmmod nvme_tcp 00:12:54.216 rmmod nvme_fabrics 00:12:54.216 rmmod nvme_keyring 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:54.216 
17:21:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 4006799 ']' 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 4006799 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 4006799 ']' 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 4006799 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4006799 00:12:54.216 17:21:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:54.217 17:21:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:54.217 17:21:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4006799' 00:12:54.217 killing process with pid 4006799 00:12:54.217 17:21:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 4006799 00:12:54.217 17:21:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 4006799 00:12:54.475 [2024-07-12 17:21:13.126354] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:54.475 17:21:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:54.475 17:21:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:54.475 17:21:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:54.475 17:21:13 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:54.475 17:21:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:54.475 17:21:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.475 17:21:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.475 17:21:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.011 17:21:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:57.011 17:21:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:57.011 00:12:57.011 real 0m12.581s 00:12:57.011 user 0m23.447s 00:12:57.011 sys 0m5.095s 00:12:57.011 17:21:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:57.012 17:21:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.012 ************************************ 00:12:57.012 END TEST nvmf_host_management 00:12:57.012 ************************************ 00:12:57.012 17:21:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:57.012 17:21:15 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:57.012 17:21:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:57.012 17:21:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.012 17:21:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:57.012 ************************************ 00:12:57.012 START TEST nvmf_lvol 00:12:57.012 ************************************ 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:57.012 * 
Looking for test storage... 00:12:57.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:57.012 17:21:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.287 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:02.288 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:02.288 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:02.288 Found net devices under 0000:86:00.0: cvl_0_0 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:02.288 Found net devices under 0000:86:00.1: cvl_0_1 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:02.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:13:02.288 00:13:02.288 --- 10.0.0.2 ping statistics --- 00:13:02.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.288 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:13:02.288 00:13:02.288 --- 10.0.0.1 ping statistics --- 00:13:02.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.288 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=4011484 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 4011484 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 4011484 ']' 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 
-- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.288 17:21:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:02.288 [2024-07-12 17:21:20.718971] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:13:02.288 [2024-07-12 17:21:20.719014] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.288 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.288 [2024-07-12 17:21:20.775301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.288 [2024-07-12 17:21:20.854283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.288 [2024-07-12 17:21:20.854320] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.288 [2024-07-12 17:21:20.854326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.288 [2024-07-12 17:21:20.854332] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.288 [2024-07-12 17:21:20.854337] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:02.288 [2024-07-12 17:21:20.854371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.288 [2024-07-12 17:21:20.854469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.289 [2024-07-12 17:21:20.854471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.855 17:21:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.855 17:21:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:02.855 17:21:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:02.855 17:21:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:02.855 17:21:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:02.855 17:21:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.855 17:21:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:03.114 [2024-07-12 17:21:21.719510] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.114 17:21:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:03.372 17:21:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:03.372 17:21:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:03.372 17:21:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:03.373 17:21:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:03.631 17:21:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:03.890 17:21:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ac7f6ded-ef8e-4401-8b2e-81c3139a64d2 00:13:03.890 17:21:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ac7f6ded-ef8e-4401-8b2e-81c3139a64d2 lvol 20 00:13:04.148 17:21:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f64b88c9-e242-4f5c-84c6-c184e69c0282 00:13:04.148 17:21:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:04.148 17:21:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f64b88c9-e242-4f5c-84c6-c184e69c0282 00:13:04.406 17:21:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:04.664 [2024-07-12 17:21:23.214581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.664 17:21:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:04.664 17:21:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4011944 00:13:04.664 17:21:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:04.664 17:21:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:04.922 EAL: No free 2048 kB hugepages reported on node 1 
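The setup traced above can be condensed into the RPC sequence below. This is a dry-run sketch reconstructed from the log, not the actual nvmf_lvol.sh script: `rpc.py` is a placeholder for spdk/scripts/rpc.py run against a live nvmf_tgt, and the UUID variables stand in for the values that the lvstore/lvol creation calls return at runtime.

```shell
# Dry-run sketch of the nvmf_lvol RPC sequence seen in the trace above.
# "plan" only records and echoes each call; to execute for real, point
# rpc at spdk/scripts/rpc.py with nvmf_tgt running and drop the wrapper.
PLAN=""
plan() { PLAN="${PLAN}$*
"; echo "$*"; }

rpc="rpc.py"                      # placeholder for scripts/rpc.py
LVS_UUID="lvs-uuid-placeholder"   # returned by bdev_lvol_create_lvstore
LVOL_UUID="lvol-uuid-placeholder" # returned by bdev_lvol_create

plan "$rpc" bdev_malloc_create 64 512                      # -> Malloc0
plan "$rpc" bdev_malloc_create 64 512                      # -> Malloc1
plan "$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
plan "$rpc" bdev_lvol_create_lvstore raid0 lvs
plan "$rpc" bdev_lvol_create -u "$LVS_UUID" lvol 20        # 20 MiB lvol
plan "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
plan "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL_UUID"
plan "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
```

After this point the test launches spdk_nvme_perf against the listener (as the trace shows on nvmf_lvol.sh@41) and then exercises snapshot, resize, clone, and inflate on the lvol before tearing the subsystem down.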
00:13:05.858 17:21:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f64b88c9-e242-4f5c-84c6-c184e69c0282 MY_SNAPSHOT 00:13:06.117 17:21:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=266d9cc4-7f9d-4694-8ed4-b7870e7cb1bb 00:13:06.117 17:21:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f64b88c9-e242-4f5c-84c6-c184e69c0282 30 00:13:06.375 17:21:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 266d9cc4-7f9d-4694-8ed4-b7870e7cb1bb MY_CLONE 00:13:06.375 17:21:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bceaf79a-1d24-45f3-b14d-9d46d09696fd 00:13:06.375 17:21:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bceaf79a-1d24-45f3-b14d-9d46d09696fd 00:13:06.940 17:21:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4011944 00:13:15.055 Initializing NVMe Controllers 00:13:15.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:15.055 Controller IO queue size 128, less than required. 00:13:15.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:15.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:15.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:15.055 Initialization complete. Launching workers. 
00:13:15.055 ======================================================== 00:13:15.055 Latency(us) 00:13:15.055 Device Information : IOPS MiB/s Average min max 00:13:15.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12342.20 48.21 10373.22 1259.30 54529.90 00:13:15.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12313.60 48.10 10400.90 3600.34 54842.43 00:13:15.056 ======================================================== 00:13:15.056 Total : 24655.79 96.31 10387.05 1259.30 54842.43 00:13:15.056 00:13:15.056 17:21:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:15.313 17:21:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f64b88c9-e242-4f5c-84c6-c184e69c0282 00:13:15.572 17:21:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ac7f6ded-ef8e-4401-8b2e-81c3139a64d2 00:13:15.572 17:21:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:15.572 17:21:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:15.572 17:21:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:15.572 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:15.572 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:15.572 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.572 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:15.572 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.572 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.572 rmmod nvme_tcp 00:13:15.572 rmmod nvme_fabrics 00:13:15.831 rmmod nvme_keyring 00:13:15.831 
17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 4011484 ']' 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 4011484 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 4011484 ']' 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 4011484 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4011484 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4011484' 00:13:15.831 killing process with pid 4011484 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 4011484 00:13:15.831 17:21:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 4011484 00:13:16.137 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:16.137 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:16.137 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:16.137 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.137 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:16.137 17:21:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.137 17:21:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.137 17:21:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.042 17:21:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:18.042 00:13:18.042 real 0m21.428s 00:13:18.042 user 1m3.721s 00:13:18.042 sys 0m6.763s 00:13:18.042 17:21:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.042 17:21:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:18.042 ************************************ 00:13:18.042 END TEST nvmf_lvol 00:13:18.042 ************************************ 00:13:18.042 17:21:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:18.042 17:21:36 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:18.042 17:21:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:18.042 17:21:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.042 17:21:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:18.042 ************************************ 00:13:18.042 START TEST nvmf_lvs_grow 00:13:18.042 ************************************ 00:13:18.042 17:21:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:18.301 * Looking for test storage... 
00:13:18.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.301 17:21:36 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.301 17:21:36 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.302 17:21:36 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.302 17:21:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:23.577 17:21:41 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.577 17:21:41 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:23.577 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:23.577 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.577 17:21:41 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:23.577 Found net devices under 0000:86:00.0: cvl_0_0 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:23.577 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:23.578 Found net devices under 0000:86:00.1: cvl_0_1 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.578 17:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:23.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:13:23.578 00:13:23.578 --- 10.0.0.2 ping statistics --- 00:13:23.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.578 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:23.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:13:23.578 00:13:23.578 --- 10.0.0.1 ping statistics --- 00:13:23.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.578 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=4017210 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 4017210 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 4017210 ']' 
00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.578 17:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:23.578 [2024-07-12 17:21:42.264714] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:13:23.578 [2024-07-12 17:21:42.264754] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.578 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.578 [2024-07-12 17:21:42.320039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.837 [2024-07-12 17:21:42.399497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.837 [2024-07-12 17:21:42.399532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.837 [2024-07-12 17:21:42.399539] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.837 [2024-07-12 17:21:42.399545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.837 [2024-07-12 17:21:42.399550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:23.837 [2024-07-12 17:21:42.399568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.406 17:21:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.406 17:21:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:24.406 17:21:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:24.406 17:21:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:24.406 17:21:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:24.406 17:21:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.406 17:21:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:24.666 [2024-07-12 17:21:43.265848] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:24.666 ************************************ 00:13:24.666 START TEST lvs_grow_clean 00:13:24.666 ************************************ 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:24.666 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:24.926 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:24.926 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:25.185 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93 00:13:25.185 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93 00:13:25.185 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:25.185 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:25.185 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:25.185 17:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93 lvol 150 00:13:25.444 17:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3f865650-21e8-4fec-a364-7382fb19c951 00:13:25.444 17:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:25.444 17:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:25.444 [2024-07-12 17:21:44.215894] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:25.444 [2024-07-12 17:21:44.215942] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:25.444 true 00:13:25.704 17:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93 00:13:25.704 17:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:25.704 17:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:25.704 17:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:13:25.963 17:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3f865650-21e8-4fec-a364-7382fb19c951 00:13:26.221 17:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:26.221 [2024-07-12 17:21:44.921991] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.222 17:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:26.481 17:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4017714 00:13:26.481 17:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:26.481 17:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:26.481 17:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4017714 /var/tmp/bdevperf.sock 00:13:26.481 17:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 4017714 ']' 00:13:26.481 17:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:26.481 17:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.481 17:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:26.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:26.481 17:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.481 17:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:26.481 [2024-07-12 17:21:45.142324] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:13:26.481 [2024-07-12 17:21:45.142371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017714 ] 00:13:26.481 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.481 [2024-07-12 17:21:45.197697] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.741 [2024-07-12 17:21:45.277883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.309 17:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.309 17:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:27.309 17:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:27.568 Nvme0n1 00:13:27.568 17:21:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:27.826 [ 00:13:27.826 { 00:13:27.827 "name": "Nvme0n1", 00:13:27.827 "aliases": [ 00:13:27.827 "3f865650-21e8-4fec-a364-7382fb19c951" 
00:13:27.827 ], 00:13:27.827 "product_name": "NVMe disk", 00:13:27.827 "block_size": 4096, 00:13:27.827 "num_blocks": 38912, 00:13:27.827 "uuid": "3f865650-21e8-4fec-a364-7382fb19c951", 00:13:27.827 "assigned_rate_limits": { 00:13:27.827 "rw_ios_per_sec": 0, 00:13:27.827 "rw_mbytes_per_sec": 0, 00:13:27.827 "r_mbytes_per_sec": 0, 00:13:27.827 "w_mbytes_per_sec": 0 00:13:27.827 }, 00:13:27.827 "claimed": false, 00:13:27.827 "zoned": false, 00:13:27.827 "supported_io_types": { 00:13:27.827 "read": true, 00:13:27.827 "write": true, 00:13:27.827 "unmap": true, 00:13:27.827 "flush": true, 00:13:27.827 "reset": true, 00:13:27.827 "nvme_admin": true, 00:13:27.827 "nvme_io": true, 00:13:27.827 "nvme_io_md": false, 00:13:27.827 "write_zeroes": true, 00:13:27.827 "zcopy": false, 00:13:27.827 "get_zone_info": false, 00:13:27.827 "zone_management": false, 00:13:27.827 "zone_append": false, 00:13:27.827 "compare": true, 00:13:27.827 "compare_and_write": true, 00:13:27.827 "abort": true, 00:13:27.827 "seek_hole": false, 00:13:27.827 "seek_data": false, 00:13:27.827 "copy": true, 00:13:27.827 "nvme_iov_md": false 00:13:27.827 }, 00:13:27.827 "memory_domains": [ 00:13:27.827 { 00:13:27.827 "dma_device_id": "system", 00:13:27.827 "dma_device_type": 1 00:13:27.827 } 00:13:27.827 ], 00:13:27.827 "driver_specific": { 00:13:27.827 "nvme": [ 00:13:27.827 { 00:13:27.827 "trid": { 00:13:27.827 "trtype": "TCP", 00:13:27.827 "adrfam": "IPv4", 00:13:27.827 "traddr": "10.0.0.2", 00:13:27.827 "trsvcid": "4420", 00:13:27.827 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:27.827 }, 00:13:27.827 "ctrlr_data": { 00:13:27.827 "cntlid": 1, 00:13:27.827 "vendor_id": "0x8086", 00:13:27.827 "model_number": "SPDK bdev Controller", 00:13:27.827 "serial_number": "SPDK0", 00:13:27.827 "firmware_revision": "24.09", 00:13:27.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:27.827 "oacs": { 00:13:27.827 "security": 0, 00:13:27.827 "format": 0, 00:13:27.827 "firmware": 0, 00:13:27.827 "ns_manage": 0 
00:13:27.827 }, 00:13:27.827 "multi_ctrlr": true, 00:13:27.827 "ana_reporting": false 00:13:27.827 }, 00:13:27.827 "vs": { 00:13:27.827 "nvme_version": "1.3" 00:13:27.827 }, 00:13:27.827 "ns_data": { 00:13:27.827 "id": 1, 00:13:27.827 "can_share": true 00:13:27.827 } 00:13:27.827 } 00:13:27.827 ], 00:13:27.827 "mp_policy": "active_passive" 00:13:27.827 } 00:13:27.827 } 00:13:27.827 ] 00:13:27.827 17:21:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4017947 00:13:27.827 17:21:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:27.827 17:21:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:27.827 Running I/O for 10 seconds... 00:13:29.204 Latency(us) 00:13:29.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:29.204 Nvme0n1 : 1.00 23064.00 90.09 0.00 0.00 0.00 0.00 0.00 00:13:29.204 =================================================================================================================== 00:13:29.204 Total : 23064.00 90.09 0.00 0.00 0.00 0.00 0.00 00:13:29.204 00:13:29.772 17:21:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93 00:13:30.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:30.031 Nvme0n1 : 2.00 23195.50 90.61 0.00 0.00 0.00 0.00 0.00 00:13:30.031 =================================================================================================================== 00:13:30.031 Total : 23195.50 90.61 0.00 0.00 0.00 0.00 0.00 00:13:30.031 00:13:30.031 true 00:13:30.031 17:21:48 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93 00:13:30.031 17:21:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:30.290 17:21:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:30.290 17:21:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:30.290 17:21:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4017947 00:13:30.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:30.857 Nvme0n1 : 3.00 23257.00 90.85 0.00 0.00 0.00 0.00 0.00 00:13:30.857 =================================================================================================================== 00:13:30.857 Total : 23257.00 90.85 0.00 0.00 0.00 0.00 0.00 00:13:30.857 00:13:32.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:32.234 Nvme0n1 : 4.00 23286.00 90.96 0.00 0.00 0.00 0.00 0.00 00:13:32.234 =================================================================================================================== 00:13:32.234 Total : 23286.00 90.96 0.00 0.00 0.00 0.00 0.00 00:13:32.234 00:13:33.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.170 Nvme0n1 : 5.00 23330.40 91.13 0.00 0.00 0.00 0.00 0.00 00:13:33.170 =================================================================================================================== 00:13:33.170 Total : 23330.40 91.13 0.00 0.00 0.00 0.00 0.00 00:13:33.170 00:13:34.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.106 Nvme0n1 : 6.00 23361.33 91.26 0.00 0.00 0.00 0.00 0.00 00:13:34.106 
=================================================================================================================== 00:13:34.106 Total : 23361.33 91.26 0.00 0.00 0.00 0.00 0.00 00:13:34.106 00:13:35.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.038 Nvme0n1 : 7.00 23400.00 91.41 0.00 0.00 0.00 0.00 0.00 00:13:35.038 =================================================================================================================== 00:13:35.038 Total : 23400.00 91.41 0.00 0.00 0.00 0.00 0.00 00:13:35.038 00:13:35.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.975 Nvme0n1 : 8.00 23428.25 91.52 0.00 0.00 0.00 0.00 0.00 00:13:35.975 =================================================================================================================== 00:13:35.975 Total : 23428.25 91.52 0.00 0.00 0.00 0.00 0.00 00:13:35.975 00:13:36.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.912 Nvme0n1 : 9.00 23443.56 91.58 0.00 0.00 0.00 0.00 0.00 00:13:36.912 =================================================================================================================== 00:13:36.912 Total : 23443.56 91.58 0.00 0.00 0.00 0.00 0.00 00:13:36.912 00:13:37.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.880 Nvme0n1 : 10.00 23452.70 91.61 0.00 0.00 0.00 0.00 0.00 00:13:37.880 =================================================================================================================== 00:13:37.880 Total : 23452.70 91.61 0.00 0.00 0.00 0.00 0.00 00:13:37.880 00:13:37.880 00:13:37.880 Latency(us) 00:13:37.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.880 Nvme0n1 : 10.01 23453.65 91.62 0.00 0.00 5454.32 1474.56 10713.71 00:13:37.880 
=================================================================================================================== 00:13:37.880 Total : 23453.65 91.62 0.00 0.00 5454.32 1474.56 10713.71 00:13:37.880 0 00:13:37.880 17:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4017714 00:13:37.880 17:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 4017714 ']' 00:13:37.880 17:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 4017714 00:13:37.880 17:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:37.880 17:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:37.880 17:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4017714 00:13:38.139 17:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:38.139 17:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:38.139 17:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4017714' 00:13:38.139 killing process with pid 4017714 00:13:38.139 17:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 4017714 00:13:38.139 Received shutdown signal, test time was about 10.000000 seconds 00:13:38.139 00:13:38.139 Latency(us) 00:13:38.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.139 =================================================================================================================== 00:13:38.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:38.139 17:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 4017714 00:13:38.139 17:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:38.398 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:38.657 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93 00:13:38.657 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:38.657 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:38.657 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:38.657 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:38.916 [2024-07-12 17:21:57.531757] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:38.916 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93 00:13:38.916 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:38.917 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93 00:13:38.917 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:38.917 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:38.917 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:38.917 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:38.917 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:38.917 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:38.917 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:38.917 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:38.917 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93 00:13:39.176 request: 00:13:39.176 { 00:13:39.176 "uuid": "a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93", 00:13:39.176 "method": "bdev_lvol_get_lvstores", 00:13:39.176 "req_id": 1 00:13:39.176 } 00:13:39.176 Got JSON-RPC error response 00:13:39.176 response: 00:13:39.176 { 00:13:39.176 "code": -19, 00:13:39.176 "message": "No such device" 00:13:39.176 } 00:13:39.176 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:39.176 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:39.176 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:39.176 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:39.176 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:39.176 aio_bdev 00:13:39.176 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3f865650-21e8-4fec-a364-7382fb19c951 00:13:39.176 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=3f865650-21e8-4fec-a364-7382fb19c951 00:13:39.176 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:39.176 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:39.176 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:39.176 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:39.176 17:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:39.436 17:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3f865650-21e8-4fec-a364-7382fb19c951 -t 2000 00:13:39.712 [ 00:13:39.712 { 00:13:39.712 "name": "3f865650-21e8-4fec-a364-7382fb19c951", 00:13:39.712 "aliases": [ 00:13:39.712 "lvs/lvol" 00:13:39.712 ], 00:13:39.712 "product_name": "Logical Volume", 00:13:39.712 "block_size": 4096, 00:13:39.712 "num_blocks": 38912, 00:13:39.712 "uuid": "3f865650-21e8-4fec-a364-7382fb19c951", 00:13:39.712 "assigned_rate_limits": { 00:13:39.712 
"rw_ios_per_sec": 0, 00:13:39.712 "rw_mbytes_per_sec": 0, 00:13:39.712 "r_mbytes_per_sec": 0, 00:13:39.712 "w_mbytes_per_sec": 0 00:13:39.712 }, 00:13:39.712 "claimed": false, 00:13:39.712 "zoned": false, 00:13:39.712 "supported_io_types": { 00:13:39.712 "read": true, 00:13:39.712 "write": true, 00:13:39.712 "unmap": true, 00:13:39.712 "flush": false, 00:13:39.712 "reset": true, 00:13:39.712 "nvme_admin": false, 00:13:39.712 "nvme_io": false, 00:13:39.712 "nvme_io_md": false, 00:13:39.712 "write_zeroes": true, 00:13:39.712 "zcopy": false, 00:13:39.712 "get_zone_info": false, 00:13:39.712 "zone_management": false, 00:13:39.712 "zone_append": false, 00:13:39.712 "compare": false, 00:13:39.712 "compare_and_write": false, 00:13:39.712 "abort": false, 00:13:39.712 "seek_hole": true, 00:13:39.712 "seek_data": true, 00:13:39.712 "copy": false, 00:13:39.712 "nvme_iov_md": false 00:13:39.712 }, 00:13:39.712 "driver_specific": { 00:13:39.712 "lvol": { 00:13:39.712 "lvol_store_uuid": "a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93", 00:13:39.712 "base_bdev": "aio_bdev", 00:13:39.712 "thin_provision": false, 00:13:39.712 "num_allocated_clusters": 38, 00:13:39.712 "snapshot": false, 00:13:39.712 "clone": false, 00:13:39.712 "esnap_clone": false 00:13:39.712 } 00:13:39.712 } 00:13:39.712 } 00:13:39.712 ] 00:13:39.712 17:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:39.712 17:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93 00:13:39.712 17:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:39.712 17:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:39.712 17:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93 00:13:39.712 17:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:39.972 17:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:39.972 17:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3f865650-21e8-4fec-a364-7382fb19c951 00:13:40.231 17:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4692e8e-ab2c-4a64-a3aa-bc9dc1963f93 00:13:40.231 17:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:40.490 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:40.490 00:13:40.490 real 0m15.832s 00:13:40.490 user 0m15.553s 00:13:40.490 sys 0m1.420s 00:13:40.490 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:40.490 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:40.491 ************************************ 00:13:40.491 END TEST lvs_grow_clean 00:13:40.491 ************************************ 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:40.491 ************************************ 00:13:40.491 START TEST lvs_grow_dirty 00:13:40.491 ************************************ 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:40.491 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:40.750 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:40.750 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:41.008 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:41.008 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:41.008 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:41.008 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:41.008 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:41.008 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aa93878d-3061-4b55-8512-dd083b44e1d2 lvol 150 00:13:41.266 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0c8b9a57-7571-4edc-9f1e-2f340909221b 00:13:41.266 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:41.266 17:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:41.523 [2024-07-12 17:22:00.091094] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:41.524 [2024-07-12 17:22:00.091146] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:41.524 
true 00:13:41.524 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:41.524 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:41.524 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:41.524 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:41.781 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0c8b9a57-7571-4edc-9f1e-2f340909221b 00:13:42.039 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:42.039 [2024-07-12 17:22:00.785145] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.039 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:42.296 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:42.296 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4020311 00:13:42.296 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- 
# trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:42.296 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4020311 /var/tmp/bdevperf.sock 00:13:42.296 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4020311 ']' 00:13:42.296 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.296 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:42.296 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:42.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:42.296 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:42.296 17:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:42.296 [2024-07-12 17:22:01.008394] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:13:42.296 [2024-07-12 17:22:01.008442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4020311 ] 00:13:42.296 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.296 [2024-07-12 17:22:01.062573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.554 [2024-07-12 17:22:01.143273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.121 17:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.121 17:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:43.121 17:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:43.379 Nvme0n1 00:13:43.379 17:22:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:43.639 [ 00:13:43.639 { 00:13:43.639 "name": "Nvme0n1", 00:13:43.639 "aliases": [ 00:13:43.639 "0c8b9a57-7571-4edc-9f1e-2f340909221b" 00:13:43.639 ], 00:13:43.639 "product_name": "NVMe disk", 00:13:43.639 "block_size": 4096, 00:13:43.639 "num_blocks": 38912, 00:13:43.639 "uuid": "0c8b9a57-7571-4edc-9f1e-2f340909221b", 00:13:43.639 "assigned_rate_limits": { 00:13:43.639 "rw_ios_per_sec": 0, 00:13:43.639 "rw_mbytes_per_sec": 0, 00:13:43.639 "r_mbytes_per_sec": 0, 00:13:43.639 "w_mbytes_per_sec": 0 00:13:43.639 }, 00:13:43.639 "claimed": false, 00:13:43.639 "zoned": false, 00:13:43.639 "supported_io_types": { 00:13:43.639 "read": true, 00:13:43.639 "write": true, 
00:13:43.639 "unmap": true, 00:13:43.639 "flush": true, 00:13:43.639 "reset": true, 00:13:43.639 "nvme_admin": true, 00:13:43.639 "nvme_io": true, 00:13:43.639 "nvme_io_md": false, 00:13:43.639 "write_zeroes": true, 00:13:43.639 "zcopy": false, 00:13:43.639 "get_zone_info": false, 00:13:43.639 "zone_management": false, 00:13:43.639 "zone_append": false, 00:13:43.639 "compare": true, 00:13:43.639 "compare_and_write": true, 00:13:43.639 "abort": true, 00:13:43.639 "seek_hole": false, 00:13:43.639 "seek_data": false, 00:13:43.639 "copy": true, 00:13:43.639 "nvme_iov_md": false 00:13:43.639 }, 00:13:43.639 "memory_domains": [ 00:13:43.639 { 00:13:43.639 "dma_device_id": "system", 00:13:43.639 "dma_device_type": 1 00:13:43.639 } 00:13:43.639 ], 00:13:43.639 "driver_specific": { 00:13:43.639 "nvme": [ 00:13:43.639 { 00:13:43.639 "trid": { 00:13:43.639 "trtype": "TCP", 00:13:43.639 "adrfam": "IPv4", 00:13:43.639 "traddr": "10.0.0.2", 00:13:43.639 "trsvcid": "4420", 00:13:43.639 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:43.639 }, 00:13:43.639 "ctrlr_data": { 00:13:43.639 "cntlid": 1, 00:13:43.639 "vendor_id": "0x8086", 00:13:43.639 "model_number": "SPDK bdev Controller", 00:13:43.639 "serial_number": "SPDK0", 00:13:43.639 "firmware_revision": "24.09", 00:13:43.639 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:43.639 "oacs": { 00:13:43.639 "security": 0, 00:13:43.639 "format": 0, 00:13:43.639 "firmware": 0, 00:13:43.639 "ns_manage": 0 00:13:43.639 }, 00:13:43.639 "multi_ctrlr": true, 00:13:43.639 "ana_reporting": false 00:13:43.639 }, 00:13:43.639 "vs": { 00:13:43.639 "nvme_version": "1.3" 00:13:43.639 }, 00:13:43.639 "ns_data": { 00:13:43.639 "id": 1, 00:13:43.639 "can_share": true 00:13:43.639 } 00:13:43.639 } 00:13:43.639 ], 00:13:43.639 "mp_policy": "active_passive" 00:13:43.639 } 00:13:43.639 } 00:13:43.639 ] 00:13:43.639 17:22:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4020540 00:13:43.639 17:22:02 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:43.639 17:22:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:43.639 Running I/O for 10 seconds... 00:13:44.574 Latency(us) 00:13:44.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:44.575 Nvme0n1 : 1.00 22126.00 86.43 0.00 0.00 0.00 0.00 0.00 00:13:44.575 =================================================================================================================== 00:13:44.575 Total : 22126.00 86.43 0.00 0.00 0.00 0.00 0.00 00:13:44.575 00:13:45.510 17:22:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:45.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:45.768 Nvme0n1 : 2.00 22251.00 86.92 0.00 0.00 0.00 0.00 0.00 00:13:45.768 =================================================================================================================== 00:13:45.768 Total : 22251.00 86.92 0.00 0.00 0.00 0.00 0.00 00:13:45.768 00:13:45.768 true 00:13:45.768 17:22:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:45.768 17:22:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:46.026 17:22:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:46.026 17:22:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 
00:13:46.026 17:22:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4020540 00:13:46.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:46.592 Nvme0n1 : 3.00 22316.67 87.17 0.00 0.00 0.00 0.00 0.00 00:13:46.592 =================================================================================================================== 00:13:46.592 Total : 22316.67 87.17 0.00 0.00 0.00 0.00 0.00 00:13:46.592 00:13:47.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:47.528 Nvme0n1 : 4.00 22379.50 87.42 0.00 0.00 0.00 0.00 0.00 00:13:47.528 =================================================================================================================== 00:13:47.528 Total : 22379.50 87.42 0.00 0.00 0.00 0.00 0.00 00:13:47.528 00:13:48.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:48.905 Nvme0n1 : 5.00 22417.20 87.57 0.00 0.00 0.00 0.00 0.00 00:13:48.905 =================================================================================================================== 00:13:48.905 Total : 22417.20 87.57 0.00 0.00 0.00 0.00 0.00 00:13:48.905 00:13:49.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.842 Nvme0n1 : 6.00 22451.67 87.70 0.00 0.00 0.00 0.00 0.00 00:13:49.842 =================================================================================================================== 00:13:49.842 Total : 22451.67 87.70 0.00 0.00 0.00 0.00 0.00 00:13:49.842 00:13:50.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.781 Nvme0n1 : 7.00 22483.14 87.82 0.00 0.00 0.00 0.00 0.00 00:13:50.781 =================================================================================================================== 00:13:50.781 Total : 22483.14 87.82 0.00 0.00 0.00 0.00 0.00 00:13:50.781 00:13:51.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:13:51.719 Nvme0n1 : 8.00 22489.75 87.85 0.00 0.00 0.00 0.00 0.00 00:13:51.719 =================================================================================================================== 00:13:51.719 Total : 22489.75 87.85 0.00 0.00 0.00 0.00 0.00 00:13:51.719 00:13:52.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.657 Nvme0n1 : 9.00 22507.33 87.92 0.00 0.00 0.00 0.00 0.00 00:13:52.657 =================================================================================================================== 00:13:52.657 Total : 22507.33 87.92 0.00 0.00 0.00 0.00 0.00 00:13:52.657 00:13:53.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.594 Nvme0n1 : 10.00 22522.20 87.98 0.00 0.00 0.00 0.00 0.00 00:13:53.594 =================================================================================================================== 00:13:53.594 Total : 22522.20 87.98 0.00 0.00 0.00 0.00 0.00 00:13:53.594 00:13:53.594 00:13:53.594 Latency(us) 00:13:53.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.594 Nvme0n1 : 10.01 22522.93 87.98 0.00 0.00 5678.91 4274.09 13050.21 00:13:53.594 =================================================================================================================== 00:13:53.594 Total : 22522.93 87.98 0.00 0.00 5678.91 4274.09 13050.21 00:13:53.594 0 00:13:53.594 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4020311 00:13:53.594 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 4020311 ']' 00:13:53.594 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 4020311 00:13:53.594 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:53.594 17:22:12 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:53.594 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4020311 00:13:53.853 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:53.853 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:53.853 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4020311' 00:13:53.853 killing process with pid 4020311 00:13:53.853 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 4020311 00:13:53.853 Received shutdown signal, test time was about 10.000000 seconds 00:13:53.853 00:13:53.853 Latency(us) 00:13:53.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.853 =================================================================================================================== 00:13:53.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:53.853 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 4020311 00:13:53.853 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:54.112 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:54.370 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:54.370 17:22:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4017210 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4017210 00:13:54.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4017210 Killed "${NVMF_APP[@]}" "$@" 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=4022384 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 4022384 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4022384 ']' 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:54.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:54.370 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:54.628 [2024-07-12 17:22:13.174039] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:13:54.628 [2024-07-12 17:22:13.174086] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.628 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.628 [2024-07-12 17:22:13.230544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.628 [2024-07-12 17:22:13.308648] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.628 [2024-07-12 17:22:13.308680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.628 [2024-07-12 17:22:13.308687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.628 [2024-07-12 17:22:13.308693] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.628 [2024-07-12 17:22:13.308697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:54.628 [2024-07-12 17:22:13.308736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.197 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:55.197 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:55.197 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:55.197 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:55.197 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:55.492 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.492 17:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:55.492 [2024-07-12 17:22:14.153607] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:55.492 [2024-07-12 17:22:14.153691] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:55.492 [2024-07-12 17:22:14.153717] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:55.492 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:55.492 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0c8b9a57-7571-4edc-9f1e-2f340909221b 00:13:55.492 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=0c8b9a57-7571-4edc-9f1e-2f340909221b 00:13:55.492 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:55.492 17:22:14 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:55.492 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:55.492 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:55.492 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:55.772 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0c8b9a57-7571-4edc-9f1e-2f340909221b -t 2000 00:13:55.772 [ 00:13:55.772 { 00:13:55.772 "name": "0c8b9a57-7571-4edc-9f1e-2f340909221b", 00:13:55.772 "aliases": [ 00:13:55.772 "lvs/lvol" 00:13:55.772 ], 00:13:55.772 "product_name": "Logical Volume", 00:13:55.773 "block_size": 4096, 00:13:55.773 "num_blocks": 38912, 00:13:55.773 "uuid": "0c8b9a57-7571-4edc-9f1e-2f340909221b", 00:13:55.773 "assigned_rate_limits": { 00:13:55.773 "rw_ios_per_sec": 0, 00:13:55.773 "rw_mbytes_per_sec": 0, 00:13:55.773 "r_mbytes_per_sec": 0, 00:13:55.773 "w_mbytes_per_sec": 0 00:13:55.773 }, 00:13:55.773 "claimed": false, 00:13:55.773 "zoned": false, 00:13:55.773 "supported_io_types": { 00:13:55.773 "read": true, 00:13:55.773 "write": true, 00:13:55.773 "unmap": true, 00:13:55.773 "flush": false, 00:13:55.773 "reset": true, 00:13:55.773 "nvme_admin": false, 00:13:55.773 "nvme_io": false, 00:13:55.773 "nvme_io_md": false, 00:13:55.773 "write_zeroes": true, 00:13:55.773 "zcopy": false, 00:13:55.773 "get_zone_info": false, 00:13:55.773 "zone_management": false, 00:13:55.773 "zone_append": false, 00:13:55.773 "compare": false, 00:13:55.773 "compare_and_write": false, 00:13:55.773 "abort": false, 00:13:55.773 "seek_hole": true, 00:13:55.773 "seek_data": true, 00:13:55.773 "copy": false, 00:13:55.773 "nvme_iov_md": false 
00:13:55.773 }, 00:13:55.773 "driver_specific": { 00:13:55.773 "lvol": { 00:13:55.773 "lvol_store_uuid": "aa93878d-3061-4b55-8512-dd083b44e1d2", 00:13:55.773 "base_bdev": "aio_bdev", 00:13:55.773 "thin_provision": false, 00:13:55.773 "num_allocated_clusters": 38, 00:13:55.773 "snapshot": false, 00:13:55.773 "clone": false, 00:13:55.773 "esnap_clone": false 00:13:55.773 } 00:13:55.773 } 00:13:55.773 } 00:13:55.773 ] 00:13:55.773 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:55.773 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:55.773 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:56.031 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:56.032 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:56.032 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:56.290 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:56.290 17:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:56.290 [2024-07-12 17:22:14.993963] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:56.290 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:56.290 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:56.290 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:56.290 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:56.290 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:56.290 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:56.290 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:56.291 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:56.291 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:56.291 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:56.291 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:56.291 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:56.550 request: 00:13:56.550 { 00:13:56.550 "uuid": "aa93878d-3061-4b55-8512-dd083b44e1d2", 00:13:56.551 "method": "bdev_lvol_get_lvstores", 
00:13:56.551 "req_id": 1 00:13:56.551 } 00:13:56.551 Got JSON-RPC error response 00:13:56.551 response: 00:13:56.551 { 00:13:56.551 "code": -19, 00:13:56.551 "message": "No such device" 00:13:56.551 } 00:13:56.551 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:56.551 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:56.551 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:56.551 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:56.551 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:56.810 aio_bdev 00:13:56.810 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0c8b9a57-7571-4edc-9f1e-2f340909221b 00:13:56.810 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=0c8b9a57-7571-4edc-9f1e-2f340909221b 00:13:56.810 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:56.810 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:56.810 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:56.810 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:56.810 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:56.810 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0c8b9a57-7571-4edc-9f1e-2f340909221b -t 2000 00:13:57.070 [ 00:13:57.070 { 00:13:57.070 "name": "0c8b9a57-7571-4edc-9f1e-2f340909221b", 00:13:57.070 "aliases": [ 00:13:57.070 "lvs/lvol" 00:13:57.070 ], 00:13:57.070 "product_name": "Logical Volume", 00:13:57.070 "block_size": 4096, 00:13:57.070 "num_blocks": 38912, 00:13:57.070 "uuid": "0c8b9a57-7571-4edc-9f1e-2f340909221b", 00:13:57.070 "assigned_rate_limits": { 00:13:57.070 "rw_ios_per_sec": 0, 00:13:57.070 "rw_mbytes_per_sec": 0, 00:13:57.070 "r_mbytes_per_sec": 0, 00:13:57.070 "w_mbytes_per_sec": 0 00:13:57.070 }, 00:13:57.070 "claimed": false, 00:13:57.070 "zoned": false, 00:13:57.070 "supported_io_types": { 00:13:57.070 "read": true, 00:13:57.070 "write": true, 00:13:57.070 "unmap": true, 00:13:57.070 "flush": false, 00:13:57.070 "reset": true, 00:13:57.070 "nvme_admin": false, 00:13:57.070 "nvme_io": false, 00:13:57.070 "nvme_io_md": false, 00:13:57.070 "write_zeroes": true, 00:13:57.070 "zcopy": false, 00:13:57.070 "get_zone_info": false, 00:13:57.070 "zone_management": false, 00:13:57.070 "zone_append": false, 00:13:57.070 "compare": false, 00:13:57.070 "compare_and_write": false, 00:13:57.070 "abort": false, 00:13:57.070 "seek_hole": true, 00:13:57.070 "seek_data": true, 00:13:57.070 "copy": false, 00:13:57.070 "nvme_iov_md": false 00:13:57.070 }, 00:13:57.070 "driver_specific": { 00:13:57.070 "lvol": { 00:13:57.070 "lvol_store_uuid": "aa93878d-3061-4b55-8512-dd083b44e1d2", 00:13:57.070 "base_bdev": "aio_bdev", 00:13:57.070 "thin_provision": false, 00:13:57.070 "num_allocated_clusters": 38, 00:13:57.070 "snapshot": false, 00:13:57.070 "clone": false, 00:13:57.070 "esnap_clone": false 00:13:57.070 } 00:13:57.070 } 00:13:57.070 } 00:13:57.070 ] 00:13:57.070 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:57.070 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:57.070 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:57.329 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:57.329 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:57.329 17:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:57.329 17:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:57.329 17:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0c8b9a57-7571-4edc-9f1e-2f340909221b 00:13:57.588 17:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aa93878d-3061-4b55-8512-dd083b44e1d2 00:13:57.847 17:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:57.847 17:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:57.847 00:13:57.847 real 0m17.383s 00:13:57.847 user 0m44.655s 00:13:57.847 sys 0m3.941s 00:13:57.847 17:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:57.847 17:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:13:57.847 ************************************ 00:13:57.847 END TEST lvs_grow_dirty 00:13:57.847 ************************************ 00:13:57.847 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:57.847 17:22:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:57.847 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:58.107 nvmf_trace.0 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.107 rmmod 
nvme_tcp 00:13:58.107 rmmod nvme_fabrics 00:13:58.107 rmmod nvme_keyring 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 4022384 ']' 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 4022384 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 4022384 ']' 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 4022384 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4022384 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4022384' 00:13:58.107 killing process with pid 4022384 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 4022384 00:13:58.107 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 4022384 00:13:58.367 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:58.367 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:58.367 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:58.367 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:13:58.367 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.367 17:22:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.367 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.367 17:22:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.273 17:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:00.273 00:14:00.273 real 0m42.227s 00:14:00.273 user 1m5.946s 00:14:00.273 sys 0m9.721s 00:14:00.273 17:22:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.273 17:22:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:00.273 ************************************ 00:14:00.273 END TEST nvmf_lvs_grow 00:14:00.273 ************************************ 00:14:00.273 17:22:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:00.273 17:22:19 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:00.273 17:22:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:00.273 17:22:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.273 17:22:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:00.532 ************************************ 00:14:00.532 START TEST nvmf_bdev_io_wait 00:14:00.532 ************************************ 00:14:00.532 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:00.532 * Looking for test storage... 
00:14:00.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.532 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.532 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.533 17:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:05.806 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.807 17:22:24 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:05.807 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:05.807 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:05.807 17:22:24 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:05.807 Found net devices under 0000:86:00.0: cvl_0_0 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:05.807 Found net devices under 0000:86:00.1: cvl_0_1 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:05.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:05.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:14:05.807 00:14:05.807 --- 10.0.0.2 ping statistics --- 00:14:05.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.807 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:05.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:14:05.807 00:14:05.807 --- 10.0.0.1 ping statistics --- 00:14:05.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.807 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=4026424 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 4026424 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 4026424 ']' 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:05.807 17:22:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:05.807 [2024-07-12 17:22:24.558898] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:14:05.807 [2024-07-12 17:22:24.558940] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.807 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.066 [2024-07-12 17:22:24.616740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.066 [2024-07-12 17:22:24.699584] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:06.066 [2024-07-12 17:22:24.699619] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.066 [2024-07-12 17:22:24.699628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.066 [2024-07-12 17:22:24.699635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.066 [2024-07-12 17:22:24.699640] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.066 [2024-07-12 17:22:24.699679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.066 [2024-07-12 17:22:24.699777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.066 [2024-07-12 17:22:24.699861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.066 [2024-07-12 17:22:24.699863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.633 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.633 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:06.633 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.633 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.633 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.634 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.634 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:06.634 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.634 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.894 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.894 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:06.894 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.894 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.894 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.894 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.894 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.894 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.894 [2024-07-12 17:22:25.477840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.894 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.894 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.895 Malloc0 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.895 [2024-07-12 17:22:25.544786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4026676 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4026678 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:06.895 { 00:14:06.895 "params": { 00:14:06.895 "name": "Nvme$subsystem", 00:14:06.895 "trtype": "$TEST_TRANSPORT", 
00:14:06.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.895 "adrfam": "ipv4", 00:14:06.895 "trsvcid": "$NVMF_PORT", 00:14:06.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.895 "hdgst": ${hdgst:-false}, 00:14:06.895 "ddgst": ${ddgst:-false} 00:14:06.895 }, 00:14:06.895 "method": "bdev_nvme_attach_controller" 00:14:06.895 } 00:14:06.895 EOF 00:14:06.895 )") 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4026680 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:06.895 { 00:14:06.895 "params": { 00:14:06.895 "name": "Nvme$subsystem", 00:14:06.895 "trtype": "$TEST_TRANSPORT", 00:14:06.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.895 "adrfam": "ipv4", 00:14:06.895 "trsvcid": "$NVMF_PORT", 00:14:06.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.895 "hdgst": ${hdgst:-false}, 00:14:06.895 
"ddgst": ${ddgst:-false} 00:14:06.895 }, 00:14:06.895 "method": "bdev_nvme_attach_controller" 00:14:06.895 } 00:14:06.895 EOF 00:14:06.895 )") 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4026683 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:06.895 { 00:14:06.895 "params": { 00:14:06.895 "name": "Nvme$subsystem", 00:14:06.895 "trtype": "$TEST_TRANSPORT", 00:14:06.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.895 "adrfam": "ipv4", 00:14:06.895 "trsvcid": "$NVMF_PORT", 00:14:06.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.895 "hdgst": ${hdgst:-false}, 00:14:06.895 "ddgst": ${ddgst:-false} 00:14:06.895 }, 00:14:06.895 "method": "bdev_nvme_attach_controller" 00:14:06.895 } 00:14:06.895 EOF 00:14:06.895 )") 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:06.895 { 00:14:06.895 "params": { 00:14:06.895 "name": "Nvme$subsystem", 00:14:06.895 "trtype": "$TEST_TRANSPORT", 00:14:06.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.895 "adrfam": "ipv4", 00:14:06.895 "trsvcid": "$NVMF_PORT", 00:14:06.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.895 "hdgst": ${hdgst:-false}, 00:14:06.895 "ddgst": ${ddgst:-false} 00:14:06.895 }, 00:14:06.895 "method": "bdev_nvme_attach_controller" 00:14:06.895 } 00:14:06.895 EOF 00:14:06.895 )") 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4026676 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:06.895 "params": { 00:14:06.895 "name": "Nvme1", 00:14:06.895 "trtype": "tcp", 00:14:06.895 "traddr": "10.0.0.2", 00:14:06.895 "adrfam": "ipv4", 00:14:06.895 "trsvcid": "4420", 00:14:06.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.895 "hdgst": false, 00:14:06.895 "ddgst": false 00:14:06.895 }, 00:14:06.895 "method": "bdev_nvme_attach_controller" 00:14:06.895 }' 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:06.895 "params": { 00:14:06.895 "name": "Nvme1", 00:14:06.895 "trtype": "tcp", 00:14:06.895 "traddr": "10.0.0.2", 00:14:06.895 "adrfam": "ipv4", 00:14:06.895 "trsvcid": "4420", 00:14:06.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.895 "hdgst": false, 00:14:06.895 "ddgst": false 00:14:06.895 }, 00:14:06.895 "method": "bdev_nvme_attach_controller" 00:14:06.895 }' 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:06.895 "params": { 00:14:06.895 "name": "Nvme1", 00:14:06.895 "trtype": "tcp", 00:14:06.895 "traddr": "10.0.0.2", 00:14:06.895 "adrfam": "ipv4", 00:14:06.895 "trsvcid": "4420", 00:14:06.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.895 "hdgst": false, 00:14:06.895 "ddgst": false 00:14:06.895 }, 00:14:06.895 "method": "bdev_nvme_attach_controller" 00:14:06.895 }' 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:06.895 17:22:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:06.895 "params": { 00:14:06.895 "name": "Nvme1", 00:14:06.895 "trtype": "tcp", 00:14:06.895 "traddr": "10.0.0.2", 00:14:06.895 "adrfam": "ipv4", 00:14:06.895 "trsvcid": "4420", 00:14:06.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.895 "hdgst": false, 00:14:06.895 "ddgst": false 00:14:06.895 }, 00:14:06.895 "method": "bdev_nvme_attach_controller" 00:14:06.895 }' 00:14:06.895 [2024-07-12 17:22:25.590802] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:14:06.895 [2024-07-12 17:22:25.590846] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:06.895 [2024-07-12 17:22:25.595923] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:14:06.895 [2024-07-12 17:22:25.595930] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:14:06.895 [2024-07-12 17:22:25.595930] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:14:06.896 [2024-07-12 17:22:25.595972] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:06.896 [2024-07-12 17:22:25.595973] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:06.896 [2024-07-12 17:22:25.595973] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:06.896 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.155 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.155 [2024-07-12 17:22:25.739594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.155 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.155 [2024-07-12 17:22:25.808472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:07.155 [2024-07-12 17:22:25.839300] app.c: 908:spdk_app_start: *NOTICE*: Total cores
available: 1 00:14:07.155 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.155 [2024-07-12 17:22:25.917462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:07.414 [2024-07-12 17:22:25.935257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.414 [2024-07-12 17:22:26.013042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:07.414 [2024-07-12 17:22:26.034001] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.414 [2024-07-12 17:22:26.126633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:07.414 Running I/O for 1 seconds... 00:14:07.672 Running I/O for 1 seconds... 00:14:07.672 Running I/O for 1 seconds... 00:14:07.672 Running I/O for 1 seconds... 00:14:08.609 00:14:08.609 Latency(us) 00:14:08.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.609 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:08.609 Nvme1n1 : 1.00 14231.07 55.59 0.00 0.00 8968.61 4957.94 16982.37 00:14:08.609 =================================================================================================================== 00:14:08.609 Total : 14231.07 55.59 0.00 0.00 8968.61 4957.94 16982.37 00:14:08.609 00:14:08.609 Latency(us) 00:14:08.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.609 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:08.609 Nvme1n1 : 1.01 6750.51 26.37 0.00 0.00 18808.90 9858.89 29063.79 00:14:08.609 =================================================================================================================== 00:14:08.609 Total : 6750.51 26.37 0.00 0.00 18808.90 9858.89 29063.79 00:14:08.609 00:14:08.609 Latency(us) 00:14:08.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.609 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:08.609 Nvme1n1 : 1.00 244843.52 956.42 
0.00 0.00 520.37 210.14 648.24 00:14:08.609 =================================================================================================================== 00:14:08.609 Total : 244843.52 956.42 0.00 0.00 520.37 210.14 648.24 00:14:08.609 00:14:08.609 Latency(us) 00:14:08.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.609 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:08.609 Nvme1n1 : 1.01 7051.78 27.55 0.00 0.00 18091.40 6126.19 44678.46 00:14:08.609 =================================================================================================================== 00:14:08.609 Total : 7051.78 27.55 0.00 0.00 18091.40 6126.19 44678.46 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4026678 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4026680 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4026683 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 
00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:08.869 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:08.869 rmmod nvme_tcp 00:14:08.869 rmmod nvme_fabrics 00:14:08.869 rmmod nvme_keyring 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 4026424 ']' 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 4026424 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 4026424 ']' 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 4026424 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4026424 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4026424' 00:14:09.129 killing process with pid 4026424 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 4026424 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 4026424 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.129 17:22:27 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.129 17:22:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.665 17:22:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:11.665 00:14:11.665 real 0m10.886s 00:14:11.665 user 0m19.862s 00:14:11.665 sys 0m5.616s 00:14:11.665 17:22:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.665 17:22:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:11.665 ************************************ 00:14:11.665 END TEST nvmf_bdev_io_wait 00:14:11.665 ************************************ 00:14:11.665 17:22:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:11.665 17:22:29 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:11.665 17:22:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:11.665 17:22:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.665 17:22:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:11.665 ************************************ 00:14:11.665 START TEST nvmf_queue_depth 00:14:11.665 ************************************ 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:11.665 * Looking for test storage... 00:14:11.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.665 17:22:30 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:11.665 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:11.666 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.666 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:11.666 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:11.666 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:11.666 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.666 17:22:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.666 17:22:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.666 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:11.666 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:11.666 17:22:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:11.666 17:22:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:16.948 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:16.948 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:16.948 Found net devices under 0000:86:00.0: cvl_0_0 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:16.948 Found net devices under 0000:86:00.1: cvl_0_1 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:16.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:16.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:14:16.948 00:14:16.948 --- 10.0.0.2 ping statistics --- 00:14:16.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.948 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:16.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:14:16.948 00:14:16.948 --- 10.0.0.1 ping statistics --- 00:14:16.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.948 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=4030458 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 4030458 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 4030458 ']' 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.948 17:22:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:16.948 [2024-07-12 17:22:35.350694] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:14:16.948 [2024-07-12 17:22:35.350736] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.948 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.948 [2024-07-12 17:22:35.405501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.948 [2024-07-12 17:22:35.484560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:16.948 [2024-07-12 17:22:35.484594] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.948 [2024-07-12 17:22:35.484602] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.948 [2024-07-12 17:22:35.484608] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.948 [2024-07-12 17:22:35.484613] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.948 [2024-07-12 17:22:35.484637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:17.552 [2024-07-12 17:22:36.187610] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:17.552 17:22:36 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:17.552 Malloc0 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:17.552 [2024-07-12 17:22:36.239625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4030549 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4030549 /var/tmp/bdevperf.sock 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 4030549 ']' 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.552 17:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:17.552 [2024-07-12 17:22:36.275199] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:14:17.552 [2024-07-12 17:22:36.275240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030549 ] 00:14:17.552 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.552 [2024-07-12 17:22:36.330510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.811 [2024-07-12 17:22:36.404645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.377 17:22:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.377 17:22:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:18.377 17:22:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:18.377 17:22:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.377 17:22:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:18.636 NVMe0n1 00:14:18.636 17:22:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.636 17:22:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:18.636 Running I/O for 10 seconds... 
00:14:30.896 00:14:30.896 Latency(us) 00:14:30.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.896 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:30.896 Verification LBA range: start 0x0 length 0x4000 00:14:30.896 NVMe0n1 : 10.07 12370.87 48.32 0.00 0.00 82506.71 19375.86 55392.17 00:14:30.896 =================================================================================================================== 00:14:30.896 Total : 12370.87 48.32 0.00 0.00 82506.71 19375.86 55392.17 00:14:30.896 0 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4030549 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 4030549 ']' 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 4030549 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4030549 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4030549' 00:14:30.896 killing process with pid 4030549 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 4030549 00:14:30.896 Received shutdown signal, test time was about 10.000000 seconds 00:14:30.896 00:14:30.896 Latency(us) 00:14:30.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.896 
=================================================================================================================== 00:14:30.896 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 4030549 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.896 rmmod nvme_tcp 00:14:30.896 rmmod nvme_fabrics 00:14:30.896 rmmod nvme_keyring 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 4030458 ']' 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 4030458 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 4030458 ']' 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 4030458 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.896 17:22:47 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4030458 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4030458' 00:14:30.896 killing process with pid 4030458 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 4030458 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 4030458 00:14:30.896 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.897 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.897 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.897 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.897 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.897 17:22:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.897 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.897 17:22:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.483 17:22:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:31.483 00:14:31.483 real 0m20.015s 00:14:31.483 user 0m24.756s 00:14:31.483 sys 0m5.416s 00:14:31.483 17:22:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.483 17:22:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:31.483 ************************************ 00:14:31.483 END TEST nvmf_queue_depth 
00:14:31.483 ************************************ 00:14:31.483 17:22:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:31.483 17:22:50 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:31.483 17:22:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:31.483 17:22:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.483 17:22:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:31.483 ************************************ 00:14:31.483 START TEST nvmf_target_multipath 00:14:31.483 ************************************ 00:14:31.483 17:22:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:31.483 * Looking for test storage... 00:14:31.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.483 17:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.483 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:31.484 17:22:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:36.764 
17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.764 17:22:55 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:36.764 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:36.764 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.764 
17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:36.764 Found net devices under 0000:86:00.0: cvl_0_0 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.764 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:36.765 Found net devices under 0000:86:00.1: cvl_0_1 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:36.765 17:22:55 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:36.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:36.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:14:36.765 00:14:36.765 --- 10.0.0.2 ping statistics --- 00:14:36.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.765 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:14:36.765 00:14:36.765 --- 10.0.0.1 ping statistics --- 00:14:36.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.765 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:36.765 only one NIC for nvmf test 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:36.765 rmmod nvme_tcp 00:14:36.765 rmmod nvme_fabrics 00:14:36.765 rmmod nvme_keyring 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.765 17:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:39.303 00:14:39.303 real 0m7.439s 00:14:39.303 user 0m1.399s 00:14:39.303 sys 0m3.989s 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:39.303 17:22:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:39.303 ************************************ 00:14:39.303 END TEST nvmf_target_multipath 00:14:39.303 ************************************ 00:14:39.303 17:22:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:39.303 17:22:57 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:39.303 17:22:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:39.303 17:22:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.303 17:22:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:39.303 ************************************ 00:14:39.303 START TEST nvmf_zcopy 00:14:39.303 ************************************ 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:39.303 * Looking for test storage... 
00:14:39.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.303 17:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.304 17:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.304 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:39.304 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:39.304 17:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:39.304 17:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:44.581 17:23:02 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:44.581 17:23:02 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:44.581 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:44.581 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:44.581 Found net devices under 0000:86:00.0: cvl_0_0 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:44.581 Found net devices under 0000:86:00.1: cvl_0_1 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.581 17:23:02 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:44.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:14:44.581 00:14:44.581 --- 10.0.0.2 ping statistics --- 00:14:44.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.581 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:44.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:14:44.581 00:14:44.581 --- 10.0.0.1 ping statistics --- 00:14:44.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.581 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=4039135 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 4039135 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 4039135 ']' 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.581 17:23:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:44.581 [2024-07-12 17:23:02.951305] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:14:44.581 [2024-07-12 17:23:02.951346] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.581 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.581 [2024-07-12 17:23:03.007240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.582 [2024-07-12 17:23:03.090614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.582 [2024-07-12 17:23:03.090645] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.582 [2024-07-12 17:23:03.090652] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.582 [2024-07-12 17:23:03.090659] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.582 [2024-07-12 17:23:03.090664] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:44.582 [2024-07-12 17:23:03.090687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.150 [2024-07-12 17:23:03.804967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.150 [2024-07-12 17:23:03.825083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.150 malloc0 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:45.150 { 00:14:45.150 "params": { 00:14:45.150 "name": "Nvme$subsystem", 00:14:45.150 "trtype": "$TEST_TRANSPORT", 00:14:45.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:45.150 "adrfam": "ipv4", 00:14:45.150 "trsvcid": "$NVMF_PORT", 00:14:45.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:45.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:45.150 "hdgst": ${hdgst:-false}, 00:14:45.150 "ddgst": ${ddgst:-false} 00:14:45.150 }, 00:14:45.150 "method": "bdev_nvme_attach_controller" 00:14:45.150 } 00:14:45.150 EOF 00:14:45.150 )") 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:45.150 17:23:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:45.150 "params": { 00:14:45.150 "name": "Nvme1", 00:14:45.150 "trtype": "tcp", 00:14:45.150 "traddr": "10.0.0.2", 00:14:45.150 "adrfam": "ipv4", 00:14:45.150 "trsvcid": "4420", 00:14:45.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:45.150 "hdgst": false, 00:14:45.150 "ddgst": false 00:14:45.150 }, 00:14:45.150 "method": "bdev_nvme_attach_controller" 00:14:45.150 }' 00:14:45.150 [2024-07-12 17:23:03.889206] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:14:45.150 [2024-07-12 17:23:03.889249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039368 ] 00:14:45.150 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.409 [2024-07-12 17:23:03.938326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.409 [2024-07-12 17:23:04.011944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.669 Running I/O for 10 seconds... 00:14:55.647 00:14:55.647 Latency(us) 00:14:55.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.648 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:55.648 Verification LBA range: start 0x0 length 0x1000 00:14:55.648 Nvme1n1 : 10.01 8674.97 67.77 0.00 0.00 14712.71 2151.29 24276.81 00:14:55.648 =================================================================================================================== 00:14:55.648 Total : 8674.97 67.77 0.00 0.00 14712.71 2151.29 24276.81 00:14:55.907 17:23:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4041187 00:14:55.907 17:23:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:55.907 17:23:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:55.907 17:23:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:55.907 17:23:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:55.907 17:23:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:55.907 17:23:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:55.907 17:23:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:55.907 17:23:14 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:55.907 { 00:14:55.907 "params": { 00:14:55.907 "name": "Nvme$subsystem", 00:14:55.907 "trtype": "$TEST_TRANSPORT", 00:14:55.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:55.907 "adrfam": "ipv4", 00:14:55.907 "trsvcid": "$NVMF_PORT", 00:14:55.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:55.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:55.907 "hdgst": ${hdgst:-false}, 00:14:55.907 "ddgst": ${ddgst:-false} 00:14:55.907 }, 00:14:55.907 "method": "bdev_nvme_attach_controller" 00:14:55.907 } 00:14:55.907 EOF 00:14:55.907 )") 00:14:55.907 17:23:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:55.907 [2024-07-12 17:23:14.522423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.907 [2024-07-12 17:23:14.522452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.907 17:23:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:14:55.907 17:23:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:55.907 17:23:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:55.907 "params": { 00:14:55.907 "name": "Nvme1", 00:14:55.907 "trtype": "tcp", 00:14:55.907 "traddr": "10.0.0.2", 00:14:55.907 "adrfam": "ipv4", 00:14:55.907 "trsvcid": "4420", 00:14:55.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.907 "hdgst": false, 00:14:55.907 "ddgst": false 00:14:55.907 }, 00:14:55.907 "method": "bdev_nvme_attach_controller" 00:14:55.907 }' 00:14:55.907 [2024-07-12 17:23:14.534425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.907 [2024-07-12 17:23:14.534437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.907 [2024-07-12 17:23:14.546450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.907 [2024-07-12 17:23:14.546460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.907 [2024-07-12 17:23:14.556493] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:14:55.907 [2024-07-12 17:23:14.556534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041187 ]
00:14:55.907 [2024-07-12 17:23:14.558483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:55.907 [2024-07-12 17:23:14.558494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:55.907 EAL: No free 2048 kB hugepages reported on node 1
00:14:55.907 [2024-07-12 17:23:14.609828] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:55.907 [2024-07-12 17:23:14.684729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:56.167 Running I/O for 5 seconds...
[2024-07-12 17:23:16.567325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.581868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.581887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.596202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.596220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.605072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.605089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.619821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.619839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.630744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.630763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.639599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.639617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.648310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.648327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.657005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.657023] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.671806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.671824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.682244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.682262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.696456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.696474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.710047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.710065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.718991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.719008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.733653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.733672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.744476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.744494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.986 [2024-07-12 17:23:16.753717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.753735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:57.986 [2024-07-12 17:23:16.762426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.986 [2024-07-12 17:23:16.762444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.771887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.771905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.786738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.786759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.797677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.797695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.807190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.807208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.816044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.816061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.825192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.825210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.840005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.840024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.849150] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.849168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.858178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.858196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.868021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.868039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.876749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.876767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.891672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.891690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.900625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.900643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.909459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.909477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.918233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.918251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.927575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:58.285 [2024-07-12 17:23:16.927593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.285 [2024-07-12 17:23:16.941946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.286 [2024-07-12 17:23:16.941965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.286 [2024-07-12 17:23:16.955581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.286 [2024-07-12 17:23:16.955601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.286 [2024-07-12 17:23:16.969515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.286 [2024-07-12 17:23:16.969533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.286 [2024-07-12 17:23:16.982864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.286 [2024-07-12 17:23:16.982883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.286 [2024-07-12 17:23:16.997116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.286 [2024-07-12 17:23:16.997139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.286 [2024-07-12 17:23:17.011033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.286 [2024-07-12 17:23:17.011052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.286 [2024-07-12 17:23:17.020369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.286 [2024-07-12 17:23:17.020395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.286 [2024-07-12 17:23:17.035330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.286 
[2024-07-12 17:23:17.035354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.286 [2024-07-12 17:23:17.050601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.286 [2024-07-12 17:23:17.050619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.286 [2024-07-12 17:23:17.059509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.286 [2024-07-12 17:23:17.059527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.074025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.074044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.087846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.087864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.101453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.101472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.110382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.110401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.119915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.119933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.134486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.134505] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.145565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.145583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.154352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.154369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.163673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.163690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.172782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.172801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.187386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.187405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.196448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.196467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.210954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.210973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.224550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.224573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:58.550 [2024-07-12 17:23:17.233464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.233484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.247992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.248011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.261927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.261947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.275976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.275995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.289896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.289915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.304084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.304103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.318285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.318304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.550 [2024-07-12 17:23:17.327261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.550 [2024-07-12 17:23:17.327279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.341807] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.341826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.350848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.350866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.360233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.360252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.369624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.369642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.378881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.378899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.388312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.388330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.397719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.397737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.412355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.412374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.426138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.426156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.440126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.440144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.454339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.454363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.468208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.468226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.482581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.482611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.493315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.493333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.502596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.502615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.517445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.517463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.533362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 
[2024-07-12 17:23:17.533387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.547927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.547944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.563520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.563539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.572388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.572406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.809 [2024-07-12 17:23:17.587543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:58.809 [2024-07-12 17:23:17.587562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.068 [2024-07-12 17:23:17.598715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.068 [2024-07-12 17:23:17.598734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.068 [2024-07-12 17:23:17.613196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.068 [2024-07-12 17:23:17.613214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.068 [2024-07-12 17:23:17.626899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.068 [2024-07-12 17:23:17.626917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.068 [2024-07-12 17:23:17.641024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.068 [2024-07-12 17:23:17.641043] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.068 [2024-07-12 17:23:17.652311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.068 [2024-07-12 17:23:17.652329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.068 [2024-07-12 17:23:17.661222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.068 [2024-07-12 17:23:17.661239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.068 [2024-07-12 17:23:17.676026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.068 [2024-07-12 17:23:17.676044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.068 [2024-07-12 17:23:17.691216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.068 [2024-07-12 17:23:17.691234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.068 [2024-07-12 17:23:17.700091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.068 [2024-07-12 17:23:17.700109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.068 [2024-07-12 17:23:17.709102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.069 [2024-07-12 17:23:17.709120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.069 [2024-07-12 17:23:17.718648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.069 [2024-07-12 17:23:17.718666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.069 [2024-07-12 17:23:17.727822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.069 [2024-07-12 17:23:17.727840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:59.069 [2024-07-12 17:23:17.742499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.069 [2024-07-12 17:23:17.742517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.069 [2024-07-12 17:23:17.751409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.069 [2024-07-12 17:23:17.751429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.069 [2024-07-12 17:23:17.766020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.069 [2024-07-12 17:23:17.766038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.069 [2024-07-12 17:23:17.774956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.069 [2024-07-12 17:23:17.774974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.069 [2024-07-12 17:23:17.784201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.069 [2024-07-12 17:23:17.784218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.069 [2024-07-12 17:23:17.798678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.069 [2024-07-12 17:23:17.798696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.069 [2024-07-12 17:23:17.812230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.069 [2024-07-12 17:23:17.812248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.069 [2024-07-12 17:23:17.825990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.069 [2024-07-12 17:23:17.826008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:59.069 [2024-07-12 17:23:17.835075] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:59.069 [2024-07-12 17:23:17.835092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... last message repeated: the same pair of errors ("Requested NSID 1 already in use" / "Unable to add namespace") recurs for every attempt between 17:23:17.849 and 17:23:19.813 ...]
00:15:01.144 [2024-07-12 17:23:19.813459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.144 [2024-07-12 17:23:19.813479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.144 [2024-07-12 17:23:19.822306] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:01.144 [2024-07-12 17:23:19.822325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:01.144 [2024-07-12 17:23:19.831781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:01.144 [2024-07-12 17:23:19.831800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:01.144 [2024-07-12 17:23:19.841775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:01.144 [2024-07-12 17:23:19.841794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:01.144 [2024-07-12 17:23:19.856588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:01.144 [2024-07-12 17:23:19.856606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:01.144
00:15:01.144 Latency(us)
00:15:01.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:01.144 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:01.144 Nvme1n1 : 5.01 16594.85 129.65 0.00 0.00 7706.32 3390.78 19945.74
00:15:01.144 ===================================================================================================================
00:15:01.144 Total : 16594.85 129.65 0.00 0.00 7706.32 3390.78 19945.74
00:15:01.144 [2024-07-12 17:23:19.868511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:01.144 [2024-07-12 17:23:19.868529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:01.144 [2024-07-12 17:23:19.880550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:01.144 [2024-07-12 17:23:19.880565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:01.144 [2024-07-12 17:23:19.892593]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.144 [2024-07-12 17:23:19.892609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.144 [2024-07-12 17:23:19.904620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.144 [2024-07-12 17:23:19.904634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.403 [2024-07-12 17:23:19.924669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.403 [2024-07-12 17:23:19.924689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.403 [2024-07-12 17:23:19.932694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.403 [2024-07-12 17:23:19.932707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.403 [2024-07-12 17:23:19.944729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.403 [2024-07-12 17:23:19.944743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.403 [2024-07-12 17:23:19.956759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.403 [2024-07-12 17:23:19.956773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.403 [2024-07-12 17:23:19.968787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.403 [2024-07-12 17:23:19.968799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.403 [2024-07-12 17:23:19.980819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.403 [2024-07-12 17:23:19.980829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.403 [2024-07-12 17:23:19.992856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:01.403 [2024-07-12 17:23:19.992868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.403 [2024-07-12 17:23:20.004885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.403 [2024-07-12 17:23:20.004898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.403 [2024-07-12 17:23:20.016917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.403 [2024-07-12 17:23:20.016928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.403 [2024-07-12 17:23:20.029139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.403 [2024-07-12 17:23:20.029193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.403 [2024-07-12 17:23:20.040991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:01.403 [2024-07-12 17:23:20.041007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:01.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4041187) - No such process 00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4041187 00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:01.403 delay0 00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.403 17:23:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:01.403 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.662 [2024-07-12 17:23:20.196527] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:08.229 [2024-07-12 17:23:26.296339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1519d00 is same with the state(5) to be set 00:15:08.229 [2024-07-12 17:23:26.296375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1519d00 is same with the state(5) to be set 00:15:08.229 Initializing NVMe Controllers 00:15:08.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:08.229 Initialization complete. Launching workers. 
00:15:08.229 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 110 00:15:08.229 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 393, failed to submit 37 00:15:08.229 success 209, unsuccess 184, failed 0 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:08.229 rmmod nvme_tcp 00:15:08.229 rmmod nvme_fabrics 00:15:08.229 rmmod nvme_keyring 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 4039135 ']' 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 4039135 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 4039135 ']' 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 4039135 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4039135 00:15:08.229 17:23:26 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4039135' 00:15:08.229 killing process with pid 4039135 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 4039135 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 4039135 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.229 17:23:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.134 17:23:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:10.134 00:15:10.134 real 0m31.051s 00:15:10.134 user 0m42.681s 00:15:10.134 sys 0m10.121s 00:15:10.134 17:23:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:10.134 17:23:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:10.134 ************************************ 00:15:10.134 END TEST nvmf_zcopy 00:15:10.134 ************************************ 00:15:10.134 17:23:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:10.134 17:23:28 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:10.134 17:23:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:10.134 17:23:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.134 17:23:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:10.134 ************************************ 00:15:10.134 START TEST nvmf_nmic 00:15:10.134 ************************************ 00:15:10.134 17:23:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:10.134 * Looking for test storage... 00:15:10.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.135 
17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:10.135 17:23:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:15.411 17:23:33 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:15.411 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:15.411 Found 0000:86:00.1 (0x8086 - 0x159b) 
00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:15.411 Found net devices under 0000:86:00.0: cvl_0_0 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.411 17:23:33 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:15.411 Found net devices under 0000:86:00.1: cvl_0_1 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:15.411 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.412 17:23:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:15.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:15:15.412 00:15:15.412 --- 10.0.0.2 ping statistics --- 00:15:15.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.412 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:15.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:15:15.412 00:15:15.412 --- 10.0.0.1 ping statistics --- 00:15:15.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.412 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:15.412 17:23:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:15.671 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=4046564 00:15:15.671 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 4046564 00:15:15.671 17:23:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:15.671 17:23:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 4046564 ']' 00:15:15.671 17:23:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:15:15.671 17:23:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.671 17:23:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.671 17:23:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.671 17:23:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:15.671 [2024-07-12 17:23:34.240245] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:15:15.671 [2024-07-12 17:23:34.240290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.671 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.671 [2024-07-12 17:23:34.297539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.671 [2024-07-12 17:23:34.380230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.671 [2024-07-12 17:23:34.380267] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.671 [2024-07-12 17:23:34.380274] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.671 [2024-07-12 17:23:34.380281] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.671 [2024-07-12 17:23:34.380286] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:15.671 [2024-07-12 17:23:34.380321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.671 [2024-07-12 17:23:34.380420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.671 [2024-07-12 17:23:34.380446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.671 [2024-07-12 17:23:34.380447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:16.609 [2024-07-12 17:23:35.091435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:16.609 Malloc0 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:16.609 [2024-07-12 17:23:35.143042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:16.609 test case1: single bdev can't be used in multiple subsystems 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:16.609 [2024-07-12 17:23:35.166978] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:16.609 [2024-07-12 17:23:35.166997] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:16.609 [2024-07-12 17:23:35.167004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.609 request: 00:15:16.609 { 00:15:16.609 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:16.609 "namespace": { 00:15:16.609 "bdev_name": "Malloc0", 00:15:16.609 "no_auto_visible": false 00:15:16.609 }, 00:15:16.609 "method": "nvmf_subsystem_add_ns", 00:15:16.609 "req_id": 1 00:15:16.609 } 00:15:16.609 Got JSON-RPC error response 00:15:16.609 response: 00:15:16.609 { 00:15:16.609 "code": -32602, 00:15:16.609 "message": "Invalid parameters" 00:15:16.609 } 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:15:16.609 Adding namespace failed - expected result. 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:16.609 test case2: host connect to nvmf target in multiple paths 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:16.609 [2024-07-12 17:23:35.179111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.609 17:23:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:17.987 17:23:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:18.924 17:23:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:18.924 17:23:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:18.924 17:23:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:18.924 17:23:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:18.924 17:23:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:20.831 17:23:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:20.831 17:23:39 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:20.831 17:23:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.831 17:23:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:20.831 17:23:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:20.831 17:23:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:20.831 17:23:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:20.831 [global] 00:15:20.831 thread=1 00:15:20.831 invalidate=1 00:15:20.831 rw=write 00:15:20.831 time_based=1 00:15:20.831 runtime=1 00:15:20.831 ioengine=libaio 00:15:20.831 direct=1 00:15:20.831 bs=4096 00:15:20.831 iodepth=1 00:15:20.831 norandommap=0 00:15:20.831 numjobs=1 00:15:20.831 00:15:20.831 verify_dump=1 00:15:20.831 verify_backlog=512 00:15:20.831 verify_state_save=0 00:15:20.831 do_verify=1 00:15:20.831 verify=crc32c-intel 00:15:20.831 [job0] 00:15:20.831 filename=/dev/nvme0n1 00:15:20.831 Could not set queue depth (nvme0n1) 00:15:21.088 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:21.088 fio-3.35 00:15:21.088 Starting 1 thread 00:15:22.464 00:15:22.464 job0: (groupid=0, jobs=1): err= 0: pid=4047596: Fri Jul 12 17:23:40 2024 00:15:22.464 read: IOPS=22, BW=89.1KiB/s (91.2kB/s)(92.0KiB/1033msec) 00:15:22.464 slat (nsec): min=10501, max=25480, avg=21965.57, stdev=2735.86 00:15:22.464 clat (usec): min=40552, max=41974, avg=40995.70, stdev=240.79 00:15:22.464 lat (usec): min=40563, max=41997, avg=41017.67, stdev=241.97 00:15:22.464 clat percentiles (usec): 00:15:22.464 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:22.464 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:15:22.464 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:22.464 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:22.464 | 99.99th=[42206] 00:15:22.464 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:15:22.464 slat (nsec): min=10010, max=44840, avg=11340.81, stdev=2205.09 00:15:22.464 clat (usec): min=141, max=352, avg=159.14, stdev=14.23 00:15:22.464 lat (usec): min=152, max=390, avg=170.49, stdev=15.51 00:15:22.464 clat percentiles (usec): 00:15:22.464 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 153], 00:15:22.464 | 30.00th=[ 155], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:15:22.464 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 167], 95.00th=[ 176], 00:15:22.464 | 99.00th=[ 202], 99.50th=[ 243], 99.90th=[ 351], 99.95th=[ 351], 00:15:22.464 | 99.99th=[ 351] 00:15:22.464 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:22.464 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:22.464 lat (usec) : 250=95.33%, 500=0.37% 00:15:22.464 lat (msec) : 50=4.30% 00:15:22.464 cpu : usr=0.87%, sys=0.39%, ctx=535, majf=0, minf=2 00:15:22.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.464 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.464 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.464 00:15:22.464 Run status group 0 (all jobs): 00:15:22.464 READ: bw=89.1KiB/s (91.2kB/s), 89.1KiB/s-89.1KiB/s (91.2kB/s-91.2kB/s), io=92.0KiB (94.2kB), run=1033-1033msec 00:15:22.464 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:15:22.464 00:15:22.464 Disk stats (read/write): 00:15:22.464 nvme0n1: ios=69/512, 
merge=0/0, ticks=801/81, in_queue=882, util=91.08% 00:15:22.464 17:23:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:22.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:22.464 17:23:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:22.464 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:22.464 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:22.464 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.464 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:22.464 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.464 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:22.464 17:23:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:22.464 17:23:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:22.464 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:22.464 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:22.464 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:22.464 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:22.465 rmmod nvme_tcp 00:15:22.465 rmmod nvme_fabrics 00:15:22.465 rmmod nvme_keyring 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:22.465 17:23:41 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 4046564 ']' 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 4046564 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 4046564 ']' 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 4046564 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4046564 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4046564' 00:15:22.465 killing process with pid 4046564 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 4046564 00:15:22.465 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 4046564 00:15:22.723 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:22.723 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:22.723 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:22.723 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:22.723 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:22.723 17:23:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.723 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.723 17:23:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.284 17:23:43 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:25.284 00:15:25.284 real 0m14.745s 00:15:25.284 user 0m34.499s 00:15:25.284 sys 0m4.787s 00:15:25.285 17:23:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:25.285 17:23:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:25.285 ************************************ 00:15:25.285 END TEST nvmf_nmic 00:15:25.285 ************************************ 00:15:25.285 17:23:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:25.285 17:23:43 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:25.285 17:23:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:25.285 17:23:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.285 17:23:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:25.285 ************************************ 00:15:25.285 START TEST nvmf_fio_target 00:15:25.285 ************************************ 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:25.285 * Looking for test storage... 
00:15:25.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:25.285 17:23:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:30.558 
17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:30.558 
17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:30.558 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:30.558 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:30.558 Found net devices under 0000:86:00.0: cvl_0_0 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:30.558 Found net devices under 0000:86:00.1: cvl_0_1 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:30.558 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:30.559 17:23:48 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:30.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:15:30.559 00:15:30.559 --- 10.0.0.2 ping statistics --- 00:15:30.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.559 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:15:30.559 00:15:30.559 --- 10.0.0.1 ping statistics --- 00:15:30.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.559 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=4051181 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 4051181 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 
-- # '[' -z 4051181 ']' 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.559 17:23:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.559 [2024-07-12 17:23:49.006796] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:15:30.559 [2024-07-12 17:23:49.006835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.559 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.559 [2024-07-12 17:23:49.065750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.559 [2024-07-12 17:23:49.146200] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.559 [2024-07-12 17:23:49.146233] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.559 [2024-07-12 17:23:49.146240] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.559 [2024-07-12 17:23:49.146246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.559 [2024-07-12 17:23:49.146251] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
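The network plumbing performed above by nvmf/common.sh (nvmf_tcp_init) can be sketched as the standalone script below. This is a minimal sketch, not the original script: the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and port 4420 are taken from the log, while the DRY_RUN guard and the run helper are illustrative additions so the sequence can be inspected without root privileges or real NICs.

```shell
#!/usr/bin/env bash
# Sketch of the namespace wiring the log performs: move the target-side
# interface into its own network namespace, address both ends, open the
# NVMe/TCP port, and verify reachability in both directions.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target side moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator keeps 10.0.0.1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target gets 10.0.0.2
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
run ping -c 1 10.0.0.2                                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```

With DRY_RUN=1 (the default here) the script only prints the commands it would run; setting DRY_RUN=0 executes them, which requires root and the named interfaces to exist.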
00:15:30.559 [2024-07-12 17:23:49.146292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.559 [2024-07-12 17:23:49.146395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.559 [2024-07-12 17:23:49.146426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.559 [2024-07-12 17:23:49.146426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.128 17:23:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.128 17:23:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:31.128 17:23:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:31.128 17:23:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:31.128 17:23:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.128 17:23:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.128 17:23:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:31.387 [2024-07-12 17:23:50.013881] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.387 17:23:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:31.646 17:23:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:31.646 17:23:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:31.905 17:23:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:31.905 17:23:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:15:31.905 17:23:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:31.905 17:23:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:32.164 17:23:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:32.164 17:23:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:32.421 17:23:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:32.680 17:23:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:32.680 17:23:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:32.680 17:23:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:32.680 17:23:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:32.938 17:23:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:32.938 17:23:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:33.197 17:23:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:33.457 17:23:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:33.457 17:23:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:33.457 17:23:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:33.457 17:23:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:33.716 17:23:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.974 [2024-07-12 17:23:52.535565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.974 17:23:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:33.974 17:23:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:34.244 17:23:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:35.627 17:23:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:35.627 17:23:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:35.627 17:23:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:35.627 17:23:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:35.627 17:23:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:35.627 17:23:54 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:37.532 17:23:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:37.532 17:23:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:37.532 17:23:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:37.532 17:23:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:37.532 17:23:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:37.532 17:23:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:37.532 17:23:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:37.532 [global] 00:15:37.532 thread=1 00:15:37.532 invalidate=1 00:15:37.532 rw=write 00:15:37.532 time_based=1 00:15:37.532 runtime=1 00:15:37.532 ioengine=libaio 00:15:37.532 direct=1 00:15:37.532 bs=4096 00:15:37.532 iodepth=1 00:15:37.532 norandommap=0 00:15:37.532 numjobs=1 00:15:37.532 00:15:37.532 verify_dump=1 00:15:37.532 verify_backlog=512 00:15:37.532 verify_state_save=0 00:15:37.532 do_verify=1 00:15:37.532 verify=crc32c-intel 00:15:37.532 [job0] 00:15:37.532 filename=/dev/nvme0n1 00:15:37.532 [job1] 00:15:37.532 filename=/dev/nvme0n2 00:15:37.532 [job2] 00:15:37.532 filename=/dev/nvme0n3 00:15:37.532 [job3] 00:15:37.532 filename=/dev/nvme0n4 00:15:37.532 Could not set queue depth (nvme0n1) 00:15:37.532 Could not set queue depth (nvme0n2) 00:15:37.532 Could not set queue depth (nvme0n3) 00:15:37.532 Could not set queue depth (nvme0n4) 00:15:37.791 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:37.791 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:15:37.791 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:37.791 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:37.791 fio-3.35 00:15:37.791 Starting 4 threads 00:15:39.169 00:15:39.169 job0: (groupid=0, jobs=1): err= 0: pid=4052653: Fri Jul 12 17:23:57 2024 00:15:39.169 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:39.169 slat (nsec): min=6405, max=39216, avg=7850.47, stdev=1137.59 00:15:39.169 clat (usec): min=209, max=936, avg=245.51, stdev=24.84 00:15:39.169 lat (usec): min=216, max=944, avg=253.36, stdev=24.92 00:15:39.169 clat percentiles (usec): 00:15:39.169 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 233], 00:15:39.169 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 247], 00:15:39.169 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 273], 00:15:39.169 | 99.00th=[ 314], 99.50th=[ 367], 99.90th=[ 474], 99.95th=[ 482], 00:15:39.169 | 99.99th=[ 938] 00:15:39.169 write: IOPS=2289, BW=9159KiB/s (9379kB/s)(9168KiB/1001msec); 0 zone resets 00:15:39.169 slat (nsec): min=9847, max=44526, avg=11741.57, stdev=1563.95 00:15:39.169 clat (usec): min=135, max=343, avg=192.59, stdev=36.52 00:15:39.169 lat (usec): min=147, max=355, avg=204.33, stdev=36.41 00:15:39.169 clat percentiles (usec): 00:15:39.169 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:15:39.169 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:15:39.169 | 70.00th=[ 221], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 249], 00:15:39.169 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 306], 99.95th=[ 310], 00:15:39.169 | 99.99th=[ 343] 00:15:39.169 bw ( KiB/s): min= 8472, max= 8472, per=39.37%, avg=8472.00, stdev= 0.00, samples=1 00:15:39.169 iops : min= 2118, max= 2118, avg=2118.00, stdev= 0.00, samples=1 00:15:39.169 lat (usec) : 250=83.32%, 500=16.66%, 1000=0.02% 00:15:39.169 cpu : 
usr=2.50%, sys=7.40%, ctx=4342, majf=0, minf=1 00:15:39.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.169 issued rwts: total=2048,2292,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.169 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.169 job1: (groupid=0, jobs=1): err= 0: pid=4052662: Fri Jul 12 17:23:57 2024 00:15:39.169 read: IOPS=21, BW=84.7KiB/s (86.7kB/s)(88.0KiB/1039msec) 00:15:39.169 slat (nsec): min=11285, max=16494, avg=12897.95, stdev=1026.58 00:15:39.169 clat (usec): min=40869, max=41051, avg=40980.19, stdev=44.00 00:15:39.169 lat (usec): min=40885, max=41063, avg=40993.09, stdev=43.66 00:15:39.169 clat percentiles (usec): 00:15:39.169 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:39.169 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:39.169 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:39.169 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:39.169 | 99.99th=[41157] 00:15:39.169 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:15:39.169 slat (nsec): min=11415, max=41011, avg=13770.75, stdev=2232.48 00:15:39.169 clat (usec): min=148, max=420, avg=250.49, stdev=51.02 00:15:39.169 lat (usec): min=160, max=435, avg=264.26, stdev=51.15 00:15:39.169 clat percentiles (usec): 00:15:39.169 | 1.00th=[ 153], 5.00th=[ 174], 10.00th=[ 196], 20.00th=[ 217], 00:15:39.170 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:15:39.170 | 70.00th=[ 269], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 334], 00:15:39.170 | 99.00th=[ 383], 99.50th=[ 408], 99.90th=[ 420], 99.95th=[ 420], 00:15:39.170 | 99.99th=[ 420] 00:15:39.170 bw ( KiB/s): min= 4096, max= 4096, per=19.03%, avg=4096.00, stdev= 0.00, 
samples=1 00:15:39.170 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:39.170 lat (usec) : 250=63.30%, 500=32.58% 00:15:39.170 lat (msec) : 50=4.12% 00:15:39.170 cpu : usr=0.48%, sys=0.96%, ctx=534, majf=0, minf=1 00:15:39.170 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.170 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.170 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.170 job2: (groupid=0, jobs=1): err= 0: pid=4052679: Fri Jul 12 17:23:57 2024 00:15:39.170 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:39.170 slat (nsec): min=7099, max=18820, avg=8129.48, stdev=917.87 00:15:39.170 clat (usec): min=210, max=474, avg=249.83, stdev=24.70 00:15:39.170 lat (usec): min=218, max=483, avg=257.96, stdev=24.76 00:15:39.170 clat percentiles (usec): 00:15:39.170 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:15:39.170 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:15:39.170 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:15:39.170 | 99.00th=[ 375], 99.50th=[ 429], 99.90th=[ 469], 99.95th=[ 469], 00:15:39.170 | 99.99th=[ 474] 00:15:39.170 write: IOPS=2271, BW=9087KiB/s (9305kB/s)(9096KiB/1001msec); 0 zone resets 00:15:39.170 slat (nsec): min=10762, max=42878, avg=12300.54, stdev=1892.74 00:15:39.170 clat (usec): min=142, max=328, avg=188.67, stdev=28.62 00:15:39.170 lat (usec): min=155, max=340, avg=200.97, stdev=29.09 00:15:39.170 clat percentiles (usec): 00:15:39.170 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:15:39.170 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:15:39.170 | 70.00th=[ 198], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 241], 00:15:39.170 | 99.00th=[ 289], 
99.50th=[ 302], 99.90th=[ 322], 99.95th=[ 326], 00:15:39.170 | 99.99th=[ 330] 00:15:39.170 bw ( KiB/s): min= 8344, max= 8344, per=38.77%, avg=8344.00, stdev= 0.00, samples=1 00:15:39.170 iops : min= 2086, max= 2086, avg=2086.00, stdev= 0.00, samples=1 00:15:39.170 lat (usec) : 250=80.17%, 500=19.83% 00:15:39.170 cpu : usr=4.30%, sys=6.40%, ctx=4324, majf=0, minf=2 00:15:39.170 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.170 issued rwts: total=2048,2274,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.170 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.170 job3: (groupid=0, jobs=1): err= 0: pid=4052685: Fri Jul 12 17:23:57 2024 00:15:39.170 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:15:39.170 slat (nsec): min=9299, max=26561, avg=22289.73, stdev=3026.45 00:15:39.170 clat (usec): min=40883, max=42060, avg=41268.76, stdev=464.44 00:15:39.170 lat (usec): min=40906, max=42083, avg=41291.05, stdev=464.09 00:15:39.170 clat percentiles (usec): 00:15:39.170 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:39.170 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:39.170 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:39.170 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:39.170 | 99.99th=[42206] 00:15:39.170 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:15:39.170 slat (nsec): min=9411, max=42721, avg=10817.46, stdev=2013.00 00:15:39.170 clat (usec): min=155, max=413, avg=221.28, stdev=27.22 00:15:39.170 lat (usec): min=167, max=423, avg=232.10, stdev=27.49 00:15:39.170 clat percentiles (usec): 00:15:39.170 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 202], 00:15:39.170 | 30.00th=[ 212], 
40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 229], 00:15:39.170 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 258], 00:15:39.170 | 99.00th=[ 289], 99.50th=[ 314], 99.90th=[ 412], 99.95th=[ 412], 00:15:39.170 | 99.99th=[ 412] 00:15:39.170 bw ( KiB/s): min= 4096, max= 4096, per=19.03%, avg=4096.00, stdev= 0.00, samples=1 00:15:39.170 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:39.170 lat (usec) : 250=87.27%, 500=8.61% 00:15:39.170 lat (msec) : 50=4.12% 00:15:39.170 cpu : usr=0.10%, sys=0.68%, ctx=535, majf=0, minf=1 00:15:39.170 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.170 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.170 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.170 00:15:39.170 Run status group 0 (all jobs): 00:15:39.170 READ: bw=15.6MiB/s (16.3MB/s), 84.7KiB/s-8184KiB/s (86.7kB/s-8380kB/s), io=16.2MiB (17.0MB), run=1001-1039msec 00:15:39.170 WRITE: bw=21.0MiB/s (22.0MB/s), 1971KiB/s-9159KiB/s (2018kB/s-9379kB/s), io=21.8MiB (22.9MB), run=1001-1039msec 00:15:39.170 00:15:39.170 Disk stats (read/write): 00:15:39.170 nvme0n1: ios=1672/2048, merge=0/0, ticks=1386/387, in_queue=1773, util=98.10% 00:15:39.170 nvme0n2: ios=22/512, merge=0/0, ticks=697/124, in_queue=821, util=86.79% 00:15:39.170 nvme0n3: ios=1672/2048, merge=0/0, ticks=1383/366, in_queue=1749, util=98.33% 00:15:39.170 nvme0n4: ios=41/512, merge=0/0, ticks=1690/107, in_queue=1797, util=98.21% 00:15:39.170 17:23:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:39.170 [global] 00:15:39.170 thread=1 00:15:39.170 invalidate=1 00:15:39.170 rw=randwrite 00:15:39.170 time_based=1 00:15:39.170 
runtime=1 00:15:39.170 ioengine=libaio 00:15:39.170 direct=1 00:15:39.170 bs=4096 00:15:39.170 iodepth=1 00:15:39.170 norandommap=0 00:15:39.170 numjobs=1 00:15:39.170 00:15:39.170 verify_dump=1 00:15:39.170 verify_backlog=512 00:15:39.170 verify_state_save=0 00:15:39.170 do_verify=1 00:15:39.170 verify=crc32c-intel 00:15:39.170 [job0] 00:15:39.170 filename=/dev/nvme0n1 00:15:39.170 [job1] 00:15:39.170 filename=/dev/nvme0n2 00:15:39.170 [job2] 00:15:39.170 filename=/dev/nvme0n3 00:15:39.170 [job3] 00:15:39.170 filename=/dev/nvme0n4 00:15:39.170 Could not set queue depth (nvme0n1) 00:15:39.170 Could not set queue depth (nvme0n2) 00:15:39.170 Could not set queue depth (nvme0n3) 00:15:39.170 Could not set queue depth (nvme0n4) 00:15:39.429 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:39.429 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:39.429 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:39.429 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:39.429 fio-3.35 00:15:39.429 Starting 4 threads 00:15:40.830 00:15:40.830 job0: (groupid=0, jobs=1): err= 0: pid=4053099: Fri Jul 12 17:23:59 2024 00:15:40.830 read: IOPS=2190, BW=8763KiB/s (8974kB/s)(8772KiB/1001msec) 00:15:40.830 slat (nsec): min=6241, max=37562, avg=7200.91, stdev=1085.39 00:15:40.830 clat (usec): min=204, max=446, avg=249.22, stdev=16.32 00:15:40.830 lat (usec): min=211, max=454, avg=256.42, stdev=16.39 00:15:40.830 clat percentiles (usec): 00:15:40.830 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:15:40.830 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 251], 00:15:40.830 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 273], 00:15:40.830 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 441], 99.95th=[ 
445], 00:15:40.830 | 99.99th=[ 449] 00:15:40.830 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:40.830 slat (nsec): min=8816, max=45521, avg=10035.36, stdev=1223.23 00:15:40.830 clat (usec): min=120, max=280, avg=156.87, stdev=15.95 00:15:40.830 lat (usec): min=130, max=310, avg=166.91, stdev=16.21 00:15:40.830 clat percentiles (usec): 00:15:40.830 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 145], 00:15:40.830 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:15:40.830 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 186], 00:15:40.830 | 99.00th=[ 196], 99.50th=[ 206], 99.90th=[ 245], 99.95th=[ 265], 00:15:40.830 | 99.99th=[ 281] 00:15:40.830 bw ( KiB/s): min=10520, max=10520, per=43.62%, avg=10520.00, stdev= 0.00, samples=1 00:15:40.830 iops : min= 2630, max= 2630, avg=2630.00, stdev= 0.00, samples=1 00:15:40.830 lat (usec) : 250=79.38%, 500=20.62% 00:15:40.830 cpu : usr=2.60%, sys=4.20%, ctx=4754, majf=0, minf=1 00:15:40.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:40.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.830 issued rwts: total=2193,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:40.830 job1: (groupid=0, jobs=1): err= 0: pid=4053115: Fri Jul 12 17:23:59 2024 00:15:40.830 read: IOPS=22, BW=90.3KiB/s (92.5kB/s)(92.0KiB/1019msec) 00:15:40.830 slat (nsec): min=10249, max=27892, avg=12657.09, stdev=3985.57 00:15:40.830 clat (usec): min=8718, max=42012, avg=39662.32, stdev=6752.67 00:15:40.830 lat (usec): min=8729, max=42024, avg=39674.98, stdev=6752.93 00:15:40.830 clat percentiles (usec): 00:15:40.830 | 1.00th=[ 8717], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:40.830 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:15:40.830 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:15:40.830 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:40.830 | 99.99th=[42206] 00:15:40.830 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:15:40.830 slat (nsec): min=9931, max=50572, avg=13318.65, stdev=4837.61 00:15:40.830 clat (usec): min=152, max=466, avg=191.46, stdev=28.07 00:15:40.830 lat (usec): min=164, max=480, avg=204.78, stdev=29.27 00:15:40.830 clat percentiles (usec): 00:15:40.830 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 172], 00:15:40.830 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:15:40.830 | 70.00th=[ 200], 80.00th=[ 215], 90.00th=[ 231], 95.00th=[ 241], 00:15:40.830 | 99.00th=[ 262], 99.50th=[ 285], 99.90th=[ 465], 99.95th=[ 465], 00:15:40.830 | 99.99th=[ 465] 00:15:40.830 bw ( KiB/s): min= 4096, max= 4096, per=16.98%, avg=4096.00, stdev= 0.00, samples=1 00:15:40.830 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:40.830 lat (usec) : 250=93.27%, 500=2.43% 00:15:40.830 lat (msec) : 10=0.19%, 50=4.11% 00:15:40.830 cpu : usr=0.59%, sys=0.79%, ctx=535, majf=0, minf=2 00:15:40.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:40.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.830 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:40.830 job2: (groupid=0, jobs=1): err= 0: pid=4053124: Fri Jul 12 17:23:59 2024 00:15:40.830 read: IOPS=2027, BW=8112KiB/s (8307kB/s)(8120KiB/1001msec) 00:15:40.830 slat (nsec): min=7343, max=37731, avg=9259.84, stdev=1642.29 00:15:40.830 clat (usec): min=219, max=1343, avg=271.17, stdev=53.14 00:15:40.830 lat (usec): min=227, max=1353, avg=280.43, stdev=53.15 00:15:40.830 clat 
percentiles (usec): 00:15:40.830 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 247], 00:15:40.830 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:15:40.830 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 334], 00:15:40.830 | 99.00th=[ 482], 99.50th=[ 498], 99.90th=[ 519], 99.95th=[ 529], 00:15:40.830 | 99.99th=[ 1352] 00:15:40.830 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:40.830 slat (nsec): min=10737, max=40178, avg=13052.69, stdev=1947.41 00:15:40.830 clat (usec): min=143, max=768, avg=190.91, stdev=22.64 00:15:40.830 lat (usec): min=156, max=779, avg=203.96, stdev=22.77 00:15:40.830 clat percentiles (usec): 00:15:40.830 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:15:40.830 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:15:40.830 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 235], 00:15:40.830 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 289], 99.95th=[ 289], 00:15:40.830 | 99.99th=[ 766] 00:15:40.830 bw ( KiB/s): min= 8448, max= 8448, per=35.03%, avg=8448.00, stdev= 0.00, samples=1 00:15:40.830 iops : min= 2112, max= 2112, avg=2112.00, stdev= 0.00, samples=1 00:15:40.830 lat (usec) : 250=64.17%, 500=35.61%, 750=0.17%, 1000=0.02% 00:15:40.830 lat (msec) : 2=0.02% 00:15:40.830 cpu : usr=4.00%, sys=6.40%, ctx=4079, majf=0, minf=1 00:15:40.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:40.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.830 issued rwts: total=2030,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:40.830 job3: (groupid=0, jobs=1): err= 0: pid=4053125: Fri Jul 12 17:23:59 2024 00:15:40.830 read: IOPS=517, BW=2069KiB/s (2118kB/s)(2108KiB/1019msec) 00:15:40.830 slat (nsec): min=7397, 
max=27714, avg=9892.57, stdev=2779.93 00:15:40.830 clat (usec): min=218, max=41399, avg=1490.52, stdev=6769.19 00:15:40.830 lat (usec): min=228, max=41409, avg=1500.42, stdev=6771.10 00:15:40.830 clat percentiles (usec): 00:15:40.831 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 262], 00:15:40.831 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 297], 60.00th=[ 314], 00:15:40.831 | 70.00th=[ 424], 80.00th=[ 449], 90.00th=[ 469], 95.00th=[ 486], 00:15:40.831 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:40.831 | 99.99th=[41157] 00:15:40.831 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets 00:15:40.831 slat (nsec): min=11048, max=37263, avg=13271.56, stdev=2026.18 00:15:40.831 clat (usec): min=155, max=509, avg=203.20, stdev=21.19 00:15:40.831 lat (usec): min=169, max=524, avg=216.47, stdev=21.55 00:15:40.831 clat percentiles (usec): 00:15:40.831 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:15:40.831 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:15:40.831 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 235], 00:15:40.831 | 99.00th=[ 269], 99.50th=[ 293], 99.90th=[ 371], 99.95th=[ 510], 00:15:40.831 | 99.99th=[ 510] 00:15:40.831 bw ( KiB/s): min= 8192, max= 8192, per=33.97%, avg=8192.00, stdev= 0.00, samples=1 00:15:40.831 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:40.831 lat (usec) : 250=68.60%, 500=30.05%, 750=0.39% 00:15:40.831 lat (msec) : 50=0.97% 00:15:40.831 cpu : usr=1.57%, sys=2.55%, ctx=1552, majf=0, minf=1 00:15:40.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:40.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.831 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.831 latency : target=0, window=0, percentile=100.00%, depth=1 
00:15:40.831 00:15:40.831 Run status group 0 (all jobs): 00:15:40.831 READ: bw=18.3MiB/s (19.2MB/s), 90.3KiB/s-8763KiB/s (92.5kB/s-8974kB/s), io=18.6MiB (19.6MB), run=1001-1019msec 00:15:40.831 WRITE: bw=23.6MiB/s (24.7MB/s), 2010KiB/s-9.99MiB/s (2058kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1019msec 00:15:40.831 00:15:40.831 Disk stats (read/write): 00:15:40.831 nvme0n1: ios=1965/2048, merge=0/0, ticks=1451/316, in_queue=1767, util=97.80% 00:15:40.831 nvme0n2: ios=23/512, merge=0/0, ticks=712/96, in_queue=808, util=86.67% 00:15:40.831 nvme0n3: ios=1562/2048, merge=0/0, ticks=1365/373, in_queue=1738, util=98.22% 00:15:40.831 nvme0n4: ios=545/1024, merge=0/0, ticks=1524/192, in_queue=1716, util=98.21% 00:15:40.831 17:23:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:40.831 [global] 00:15:40.831 thread=1 00:15:40.831 invalidate=1 00:15:40.831 rw=write 00:15:40.831 time_based=1 00:15:40.831 runtime=1 00:15:40.831 ioengine=libaio 00:15:40.831 direct=1 00:15:40.831 bs=4096 00:15:40.831 iodepth=128 00:15:40.831 norandommap=0 00:15:40.831 numjobs=1 00:15:40.831 00:15:40.831 verify_dump=1 00:15:40.831 verify_backlog=512 00:15:40.831 verify_state_save=0 00:15:40.831 do_verify=1 00:15:40.831 verify=crc32c-intel 00:15:40.831 [job0] 00:15:40.831 filename=/dev/nvme0n1 00:15:40.831 [job1] 00:15:40.831 filename=/dev/nvme0n2 00:15:40.831 [job2] 00:15:40.831 filename=/dev/nvme0n3 00:15:40.831 [job3] 00:15:40.831 filename=/dev/nvme0n4 00:15:40.831 Could not set queue depth (nvme0n1) 00:15:40.831 Could not set queue depth (nvme0n2) 00:15:40.831 Could not set queue depth (nvme0n3) 00:15:40.831 Could not set queue depth (nvme0n4) 00:15:41.091 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:41.091 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:15:41.091 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:41.091 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:41.091 fio-3.35 00:15:41.091 Starting 4 threads 00:15:42.462 00:15:42.462 job0: (groupid=0, jobs=1): err= 0: pid=4053500: Fri Jul 12 17:24:00 2024 00:15:42.462 read: IOPS=5311, BW=20.7MiB/s (21.8MB/s)(20.8MiB/1004msec) 00:15:42.462 slat (nsec): min=1064, max=9141.4k, avg=75415.59, stdev=480630.88 00:15:42.462 clat (usec): min=3656, max=22690, avg=10353.38, stdev=2098.89 00:15:42.462 lat (usec): min=3662, max=22697, avg=10428.80, stdev=2116.48 00:15:42.462 clat percentiles (usec): 00:15:42.462 | 1.00th=[ 4047], 5.00th=[ 7111], 10.00th=[ 7963], 20.00th=[ 9241], 00:15:42.462 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:15:42.462 | 70.00th=[10683], 80.00th=[11469], 90.00th=[13042], 95.00th=[13698], 00:15:42.462 | 99.00th=[18220], 99.50th=[19792], 99.90th=[19792], 99.95th=[19792], 00:15:42.462 | 99.99th=[22676] 00:15:42.462 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:15:42.462 slat (usec): min=2, max=17326, avg=90.96, stdev=632.34 00:15:42.462 clat (usec): min=368, max=116945, avg=12828.47, stdev=13833.19 00:15:42.462 lat (usec): min=425, max=116949, avg=12919.43, stdev=13908.72 00:15:42.462 clat percentiles (msec): 00:15:42.462 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 10], 00:15:42.462 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:15:42.462 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 15], 95.00th=[ 24], 00:15:42.462 | 99.00th=[ 99], 99.50th=[ 113], 99.90th=[ 117], 99.95th=[ 117], 00:15:42.462 | 99.99th=[ 117] 00:15:42.462 bw ( KiB/s): min=20408, max=24648, per=31.57%, avg=22528.00, stdev=2998.13, samples=2 00:15:42.462 iops : min= 5102, max= 6162, avg=5632.00, stdev=749.53, samples=2 00:15:42.462 lat (usec) : 500=0.04%, 
750=0.03%, 1000=0.03% 00:15:42.462 lat (msec) : 2=0.22%, 4=1.84%, 10=39.64%, 20=54.74%, 50=2.10% 00:15:42.462 lat (msec) : 100=0.88%, 250=0.50% 00:15:42.462 cpu : usr=3.99%, sys=4.39%, ctx=561, majf=0, minf=1 00:15:42.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:42.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:42.462 issued rwts: total=5333,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:42.462 job1: (groupid=0, jobs=1): err= 0: pid=4053501: Fri Jul 12 17:24:00 2024 00:15:42.462 read: IOPS=5106, BW=19.9MiB/s (20.9MB/s)(20.8MiB/1045msec) 00:15:42.462 slat (nsec): min=1289, max=12384k, avg=88858.39, stdev=541673.36 00:15:42.462 clat (usec): min=3395, max=63105, avg=12385.05, stdev=7555.40 00:15:42.462 lat (usec): min=3405, max=63112, avg=12473.91, stdev=7579.47 00:15:42.462 clat percentiles (usec): 00:15:42.462 | 1.00th=[ 6587], 5.00th=[ 7701], 10.00th=[ 8356], 20.00th=[ 9503], 00:15:42.462 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:15:42.462 | 70.00th=[10945], 80.00th=[12780], 90.00th=[17957], 95.00th=[24773], 00:15:42.462 | 99.00th=[55313], 99.50th=[60556], 99.90th=[63177], 99.95th=[63177], 00:15:42.462 | 99.99th=[63177] 00:15:42.462 write: IOPS=5389, BW=21.1MiB/s (22.1MB/s)(22.0MiB/1045msec); 0 zone resets 00:15:42.462 slat (usec): min=2, max=13496, avg=81.00, stdev=511.50 00:15:42.462 clat (usec): min=1201, max=54793, avg=11766.13, stdev=6492.85 00:15:42.462 lat (usec): min=1210, max=54800, avg=11847.13, stdev=6530.56 00:15:42.462 clat percentiles (usec): 00:15:42.462 | 1.00th=[ 4293], 5.00th=[ 6915], 10.00th=[ 8029], 20.00th=[ 9503], 00:15:42.462 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:15:42.462 | 70.00th=[10814], 80.00th=[12125], 90.00th=[15139], 95.00th=[19268], 
00:15:42.462 | 99.00th=[46400], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:15:42.462 | 99.99th=[54789] 00:15:42.462 bw ( KiB/s): min=21064, max=23992, per=31.57%, avg=22528.00, stdev=2070.41, samples=2 00:15:42.462 iops : min= 5266, max= 5998, avg=5632.00, stdev=517.60, samples=2 00:15:42.462 lat (msec) : 2=0.02%, 4=0.41%, 10=35.90%, 20=56.59%, 50=6.10% 00:15:42.462 lat (msec) : 100=0.98% 00:15:42.462 cpu : usr=3.45%, sys=4.50%, ctx=658, majf=0, minf=1 00:15:42.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:42.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:42.462 issued rwts: total=5336,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:42.462 job2: (groupid=0, jobs=1): err= 0: pid=4053502: Fri Jul 12 17:24:00 2024 00:15:42.462 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:15:42.462 slat (nsec): min=1694, max=33162k, avg=175956.97, stdev=1518788.59 00:15:42.462 clat (usec): min=1406, max=104557, avg=23443.42, stdev=20482.75 00:15:42.462 lat (usec): min=1425, max=104564, avg=23619.38, stdev=20602.58 00:15:42.462 clat percentiles (usec): 00:15:42.462 | 1.00th=[ 1565], 5.00th=[ 6259], 10.00th=[ 9110], 20.00th=[ 11207], 00:15:42.462 | 30.00th=[ 11600], 40.00th=[ 15008], 50.00th=[ 18482], 60.00th=[ 19530], 00:15:42.462 | 70.00th=[ 22152], 80.00th=[ 29754], 90.00th=[ 50594], 95.00th=[ 70779], 00:15:42.462 | 99.00th=[104334], 99.50th=[104334], 99.90th=[104334], 99.95th=[104334], 00:15:42.462 | 99.99th=[104334] 00:15:42.462 write: IOPS=3443, BW=13.5MiB/s (14.1MB/s)(13.6MiB/1010msec); 0 zone resets 00:15:42.462 slat (usec): min=2, max=12383, avg=114.45, stdev=724.41 00:15:42.462 clat (usec): min=1074, max=60775, avg=16150.43, stdev=9245.79 00:15:42.462 lat (usec): min=1085, max=60781, avg=16264.89, stdev=9304.87 00:15:42.462 
clat percentiles (usec): 00:15:42.462 | 1.00th=[ 4113], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10683], 00:15:42.462 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[14746], 00:15:42.462 | 70.00th=[16909], 80.00th=[21627], 90.00th=[24511], 95.00th=[34341], 00:15:42.462 | 99.00th=[55313], 99.50th=[57934], 99.90th=[60556], 99.95th=[60556], 00:15:42.462 | 99.99th=[60556] 00:15:42.462 bw ( KiB/s): min=10776, max=16032, per=18.78%, avg=13404.00, stdev=3716.55, samples=2 00:15:42.462 iops : min= 2694, max= 4008, avg=3351.00, stdev=929.14, samples=2 00:15:42.462 lat (msec) : 2=1.60%, 4=0.18%, 10=11.74%, 20=56.14%, 50=24.06% 00:15:42.462 lat (msec) : 100=5.80%, 250=0.47% 00:15:42.462 cpu : usr=2.68%, sys=3.57%, ctx=319, majf=0, minf=1 00:15:42.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:15:42.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:42.462 issued rwts: total=3072,3478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:42.462 job3: (groupid=0, jobs=1): err= 0: pid=4053503: Fri Jul 12 17:24:00 2024 00:15:42.462 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:15:42.462 slat (nsec): min=1457, max=13240k, avg=122368.51, stdev=837912.08 00:15:42.462 clat (usec): min=3681, max=44857, avg=14359.45, stdev=5375.55 00:15:42.462 lat (usec): min=3683, max=44865, avg=14481.82, stdev=5448.27 00:15:42.462 clat percentiles (usec): 00:15:42.462 | 1.00th=[ 6128], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[11863], 00:15:42.462 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13698], 00:15:42.462 | 70.00th=[13960], 80.00th=[14484], 90.00th=[19268], 95.00th=[26084], 00:15:42.462 | 99.00th=[38536], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:15:42.462 | 99.99th=[44827] 00:15:42.462 write: IOPS=3859, BW=15.1MiB/s 
(15.8MB/s)(15.2MiB/1011msec); 0 zone resets 00:15:42.462 slat (usec): min=2, max=13380, avg=137.08, stdev=733.27 00:15:42.462 clat (usec): min=1611, max=54698, avg=19680.88, stdev=9853.00 00:15:42.462 lat (usec): min=1629, max=54709, avg=19817.96, stdev=9924.24 00:15:42.462 clat percentiles (usec): 00:15:42.462 | 1.00th=[ 4424], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11338], 00:15:42.462 | 30.00th=[12125], 40.00th=[13173], 50.00th=[17433], 60.00th=[22414], 00:15:42.462 | 70.00th=[23987], 80.00th=[27657], 90.00th=[32113], 95.00th=[34341], 00:15:42.462 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:15:42.462 | 99.99th=[54789] 00:15:42.462 bw ( KiB/s): min=13824, max=16368, per=21.15%, avg=15096.00, stdev=1798.88, samples=2 00:15:42.462 iops : min= 3456, max= 4092, avg=3774.00, stdev=449.72, samples=2 00:15:42.462 lat (msec) : 2=0.03%, 4=0.73%, 10=7.88%, 20=63.24%, 50=27.06% 00:15:42.462 lat (msec) : 100=1.06% 00:15:42.462 cpu : usr=3.27%, sys=4.46%, ctx=385, majf=0, minf=1 00:15:42.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:42.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:42.462 issued rwts: total=3584,3902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:42.462 00:15:42.462 Run status group 0 (all jobs): 00:15:42.462 READ: bw=64.8MiB/s (67.9MB/s), 11.9MiB/s-20.7MiB/s (12.5MB/s-21.8MB/s), io=67.7MiB (71.0MB), run=1004-1045msec 00:15:42.462 WRITE: bw=69.7MiB/s (73.1MB/s), 13.5MiB/s-21.9MiB/s (14.1MB/s-23.0MB/s), io=72.8MiB (76.4MB), run=1004-1045msec 00:15:42.462 00:15:42.462 Disk stats (read/write): 00:15:42.462 nvme0n1: ios=4386/4608, merge=0/0, ticks=27494/40964, in_queue=68458, util=89.77% 00:15:42.462 nvme0n2: ios=4799/5120, merge=0/0, ticks=28385/28474, in_queue=56859, util=99.18% 00:15:42.462 nvme0n3: 
ios=2302/2560, merge=0/0, ticks=25545/26330, in_queue=51875, util=88.34% 00:15:42.462 nvme0n4: ios=3072/3295, merge=0/0, ticks=40121/60655, in_queue=100776, util=89.52% 00:15:42.462 17:24:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:42.462 [global] 00:15:42.462 thread=1 00:15:42.462 invalidate=1 00:15:42.462 rw=randwrite 00:15:42.462 time_based=1 00:15:42.462 runtime=1 00:15:42.462 ioengine=libaio 00:15:42.462 direct=1 00:15:42.462 bs=4096 00:15:42.462 iodepth=128 00:15:42.462 norandommap=0 00:15:42.462 numjobs=1 00:15:42.462 00:15:42.462 verify_dump=1 00:15:42.462 verify_backlog=512 00:15:42.462 verify_state_save=0 00:15:42.462 do_verify=1 00:15:42.462 verify=crc32c-intel 00:15:42.462 [job0] 00:15:42.462 filename=/dev/nvme0n1 00:15:42.462 [job1] 00:15:42.462 filename=/dev/nvme0n2 00:15:42.462 [job2] 00:15:42.462 filename=/dev/nvme0n3 00:15:42.462 [job3] 00:15:42.462 filename=/dev/nvme0n4 00:15:42.462 Could not set queue depth (nvme0n1) 00:15:42.462 Could not set queue depth (nvme0n2) 00:15:42.462 Could not set queue depth (nvme0n3) 00:15:42.462 Could not set queue depth (nvme0n4) 00:15:42.720 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:42.720 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:42.720 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:42.720 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:42.720 fio-3.35 00:15:42.720 Starting 4 threads 00:15:44.092 00:15:44.092 job0: (groupid=0, jobs=1): err= 0: pid=4053936: Fri Jul 12 17:24:02 2024 00:15:44.092 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:15:44.092 slat (nsec): min=1023, max=15324k, 
avg=103159.00, stdev=692178.43 00:15:44.092 clat (usec): min=5585, max=57490, avg=13179.81, stdev=5429.49 00:15:44.092 lat (usec): min=5590, max=57497, avg=13282.97, stdev=5493.68 00:15:44.092 clat percentiles (usec): 00:15:44.092 | 1.00th=[ 6718], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10290], 00:15:44.092 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:15:44.092 | 70.00th=[14615], 80.00th=[15533], 90.00th=[17957], 95.00th=[19792], 00:15:44.092 | 99.00th=[39584], 99.50th=[51119], 99.90th=[57410], 99.95th=[57410], 00:15:44.092 | 99.99th=[57410] 00:15:44.092 write: IOPS=4219, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1003msec); 0 zone resets 00:15:44.092 slat (nsec): min=1993, max=12688k, avg=116768.24, stdev=665396.41 00:15:44.092 clat (usec): min=366, max=59141, avg=17317.99, stdev=13792.89 00:15:44.092 lat (usec): min=531, max=59148, avg=17434.76, stdev=13883.71 00:15:44.092 clat percentiles (usec): 00:15:44.092 | 1.00th=[ 750], 5.00th=[ 4228], 10.00th=[ 6128], 20.00th=[ 8094], 00:15:44.092 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10814], 60.00th=[13566], 00:15:44.092 | 70.00th=[17433], 80.00th=[25560], 90.00th=[44827], 95.00th=[48497], 00:15:44.092 | 99.00th=[54264], 99.50th=[55837], 99.90th=[58983], 99.95th=[58983], 00:15:44.092 | 99.99th=[58983] 00:15:44.092 bw ( KiB/s): min=12263, max=20600, per=22.55%, avg=16431.50, stdev=5895.15, samples=2 00:15:44.092 iops : min= 3065, max= 5150, avg=4107.50, stdev=1474.32, samples=2 00:15:44.092 lat (usec) : 500=0.01%, 750=0.56%, 1000=0.44% 00:15:44.092 lat (msec) : 2=0.35%, 4=1.12%, 10=22.96%, 20=57.30%, 50=14.95% 00:15:44.092 lat (msec) : 100=2.31% 00:15:44.092 cpu : usr=2.59%, sys=4.09%, ctx=448, majf=0, minf=1 00:15:44.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:44.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:44.092 issued rwts: 
total=4096,4232,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:44.092 job1: (groupid=0, jobs=1): err= 0: pid=4053937: Fri Jul 12 17:24:02 2024 00:15:44.092 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:15:44.092 slat (nsec): min=1340, max=7227.4k, avg=88013.83, stdev=455996.83 00:15:44.092 clat (usec): min=6886, max=28796, avg=11018.27, stdev=2340.20 00:15:44.092 lat (usec): min=7414, max=28821, avg=11106.28, stdev=2355.04 00:15:44.092 clat percentiles (usec): 00:15:44.092 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:15:44.092 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:15:44.092 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12256], 95.00th=[12911], 00:15:44.092 | 99.00th=[24773], 99.50th=[26084], 99.90th=[26084], 99.95th=[26346], 00:15:44.092 | 99.99th=[28705] 00:15:44.092 write: IOPS=6033, BW=23.6MiB/s (24.7MB/s)(23.6MiB/1001msec); 0 zone resets 00:15:44.092 slat (usec): min=2, max=4796, avg=78.55, stdev=383.02 00:15:44.092 clat (usec): min=492, max=16096, avg=10585.51, stdev=1184.85 00:15:44.092 lat (usec): min=3215, max=16130, avg=10664.06, stdev=1213.36 00:15:44.092 clat percentiles (usec): 00:15:44.092 | 1.00th=[ 6980], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10159], 00:15:44.092 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:15:44.092 | 70.00th=[10683], 80.00th=[10945], 90.00th=[12125], 95.00th=[12518], 00:15:44.092 | 99.00th=[13698], 99.50th=[14353], 99.90th=[15926], 99.95th=[15926], 00:15:44.092 | 99.99th=[16057] 00:15:44.092 bw ( KiB/s): min=24576, max=24576, per=33.74%, avg=24576.00, stdev= 0.00, samples=1 00:15:44.092 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:15:44.092 lat (usec) : 500=0.01% 00:15:44.092 lat (msec) : 4=0.36%, 10=18.27%, 20=80.52%, 50=0.85% 00:15:44.092 cpu : usr=3.20%, sys=5.40%, ctx=685, majf=0, minf=1 00:15:44.092 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:44.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:44.092 issued rwts: total=5632,6040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:44.092 job2: (groupid=0, jobs=1): err= 0: pid=4053938: Fri Jul 12 17:24:02 2024 00:15:44.092 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:15:44.092 slat (nsec): min=1324, max=16281k, avg=112705.44, stdev=687504.89 00:15:44.092 clat (usec): min=9075, max=36592, avg=14541.50, stdev=3813.33 00:15:44.092 lat (usec): min=9080, max=36599, avg=14654.21, stdev=3869.32 00:15:44.092 clat percentiles (usec): 00:15:44.092 | 1.00th=[ 9503], 5.00th=[10814], 10.00th=[11207], 20.00th=[12387], 00:15:44.092 | 30.00th=[13173], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:15:44.092 | 70.00th=[14484], 80.00th=[15139], 90.00th=[17433], 95.00th=[20841], 00:15:44.092 | 99.00th=[32375], 99.50th=[32637], 99.90th=[33162], 99.95th=[35390], 00:15:44.092 | 99.99th=[36439] 00:15:44.092 write: IOPS=3828, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1007msec); 0 zone resets 00:15:44.092 slat (usec): min=2, max=20699, avg=147.28, stdev=883.65 00:15:44.092 clat (usec): min=3522, max=56418, avg=19373.30, stdev=11193.52 00:15:44.092 lat (usec): min=6038, max=56427, avg=19520.58, stdev=11266.18 00:15:44.092 clat percentiles (usec): 00:15:44.092 | 1.00th=[ 6521], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11600], 00:15:44.092 | 30.00th=[12518], 40.00th=[13304], 50.00th=[13829], 60.00th=[16712], 00:15:44.092 | 70.00th=[21627], 80.00th=[27657], 90.00th=[35914], 95.00th=[45351], 00:15:44.092 | 99.00th=[55837], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:15:44.092 | 99.99th=[56361] 00:15:44.092 bw ( KiB/s): min=13880, max=15936, per=20.46%, avg=14908.00, stdev=1453.81, samples=2 00:15:44.092 iops : min= 3470, max= 3984, 
avg=3727.00, stdev=363.45, samples=2 00:15:44.092 lat (msec) : 4=0.01%, 10=4.49%, 20=73.99%, 50=20.08%, 100=1.42% 00:15:44.092 cpu : usr=3.78%, sys=5.17%, ctx=339, majf=0, minf=1 00:15:44.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:44.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:44.092 issued rwts: total=3584,3855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:44.092 job3: (groupid=0, jobs=1): err= 0: pid=4053939: Fri Jul 12 17:24:02 2024 00:15:44.092 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:15:44.092 slat (nsec): min=1429, max=13854k, avg=103419.73, stdev=592899.22 00:15:44.092 clat (usec): min=6813, max=38087, avg=12902.61, stdev=2973.44 00:15:44.092 lat (usec): min=6830, max=38101, avg=13006.03, stdev=3035.18 00:15:44.092 clat percentiles (usec): 00:15:44.092 | 1.00th=[ 8160], 5.00th=[ 9241], 10.00th=[10552], 20.00th=[11207], 00:15:44.092 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12256], 60.00th=[13173], 00:15:44.092 | 70.00th=[13829], 80.00th=[14484], 90.00th=[15270], 95.00th=[16712], 00:15:44.092 | 99.00th=[24249], 99.50th=[29754], 99.90th=[34866], 99.95th=[34866], 00:15:44.092 | 99.99th=[38011] 00:15:44.092 write: IOPS=4192, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1005msec); 0 zone resets 00:15:44.092 slat (usec): min=2, max=23742, avg=130.54, stdev=850.38 00:15:44.092 clat (usec): min=1132, max=64839, avg=17616.35, stdev=13257.52 00:15:44.092 lat (usec): min=5383, max=64851, avg=17746.89, stdev=13361.88 00:15:44.092 clat percentiles (usec): 00:15:44.092 | 1.00th=[ 6521], 5.00th=[ 9241], 10.00th=[10814], 20.00th=[11207], 00:15:44.093 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:15:44.093 | 70.00th=[12518], 80.00th=[20317], 90.00th=[39584], 95.00th=[53216], 00:15:44.093 | 99.00th=[62129], 
99.50th=[63701], 99.90th=[64750], 99.95th=[64750], 00:15:44.093 | 99.99th=[64750] 00:15:44.093 bw ( KiB/s): min=11288, max=21485, per=22.49%, avg=16386.50, stdev=7210.37, samples=2 00:15:44.093 iops : min= 2822, max= 5371, avg=4096.50, stdev=1802.42, samples=2 00:15:44.093 lat (msec) : 2=0.01%, 10=6.35%, 20=81.14%, 50=9.69%, 100=2.80% 00:15:44.093 cpu : usr=4.38%, sys=5.28%, ctx=382, majf=0, minf=1 00:15:44.093 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:44.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:44.093 issued rwts: total=4096,4213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.093 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:44.093 00:15:44.093 Run status group 0 (all jobs): 00:15:44.093 READ: bw=67.5MiB/s (70.8MB/s), 13.9MiB/s-22.0MiB/s (14.6MB/s-23.0MB/s), io=68.0MiB (71.3MB), run=1001-1007msec 00:15:44.093 WRITE: bw=71.1MiB/s (74.6MB/s), 15.0MiB/s-23.6MiB/s (15.7MB/s-24.7MB/s), io=71.6MiB (75.1MB), run=1001-1007msec 00:15:44.093 00:15:44.093 Disk stats (read/write): 00:15:44.093 nvme0n1: ios=3121/3430, merge=0/0, ticks=38018/59815, in_queue=97833, util=86.96% 00:15:44.093 nvme0n2: ios=4803/5120, merge=0/0, ticks=18619/17092, in_queue=35711, util=95.94% 00:15:44.093 nvme0n3: ios=3113/3418, merge=0/0, ticks=22463/29925, in_queue=52388, util=96.15% 00:15:44.093 nvme0n4: ios=3627/3895, merge=0/0, ticks=23492/28472, in_queue=51964, util=96.86% 00:15:44.093 17:24:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:44.093 17:24:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4054234 00:15:44.093 17:24:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:44.093 17:24:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:44.093 [global] 
00:15:44.093 thread=1 00:15:44.093 invalidate=1 00:15:44.093 rw=read 00:15:44.093 time_based=1 00:15:44.093 runtime=10 00:15:44.093 ioengine=libaio 00:15:44.093 direct=1 00:15:44.093 bs=4096 00:15:44.093 iodepth=1 00:15:44.093 norandommap=1 00:15:44.093 numjobs=1 00:15:44.093 00:15:44.093 [job0] 00:15:44.093 filename=/dev/nvme0n1 00:15:44.093 [job1] 00:15:44.093 filename=/dev/nvme0n2 00:15:44.093 [job2] 00:15:44.093 filename=/dev/nvme0n3 00:15:44.093 [job3] 00:15:44.093 filename=/dev/nvme0n4 00:15:44.093 Could not set queue depth (nvme0n1) 00:15:44.093 Could not set queue depth (nvme0n2) 00:15:44.093 Could not set queue depth (nvme0n3) 00:15:44.093 Could not set queue depth (nvme0n4) 00:15:44.093 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:44.093 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:44.093 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:44.093 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:44.093 fio-3.35 00:15:44.093 Starting 4 threads 00:15:47.378 17:24:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:47.378 17:24:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:47.378 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=18493440, buflen=4096 00:15:47.379 fio: pid=4054377, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:47.379 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=21536768, buflen=4096 00:15:47.379 fio: pid=4054375, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:47.379 17:24:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 
-- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:47.379 17:24:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:47.379 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=33767424, buflen=4096 00:15:47.379 fio: pid=4054373, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:47.379 17:24:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:47.379 17:24:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:47.639 17:24:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:47.639 17:24:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:47.639 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=41922560, buflen=4096 00:15:47.639 fio: pid=4054374, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:15:47.639 00:15:47.639 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4054373: Fri Jul 12 17:24:06 2024 00:15:47.639 read: IOPS=2679, BW=10.5MiB/s (11.0MB/s)(32.2MiB/3077msec) 00:15:47.639 slat (usec): min=5, max=24381, avg=11.93, stdev=318.60 00:15:47.639 clat (usec): min=204, max=41897, avg=357.57, stdev=1794.23 00:15:47.639 lat (usec): min=211, max=41919, avg=369.50, stdev=1823.10 00:15:47.639 clat percentiles (usec): 00:15:47.639 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:15:47.639 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 277], 60.00th=[ 285], 00:15:47.639 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 338], 00:15:47.639 | 
99.00th=[ 469], 99.50th=[ 502], 99.90th=[41157], 99.95th=[41157], 00:15:47.639 | 99.99th=[41681] 00:15:47.639 bw ( KiB/s): min= 136, max=14696, per=29.59%, avg=10137.60, stdev=6006.92, samples=5 00:15:47.639 iops : min= 34, max= 3674, avg=2534.40, stdev=1501.73, samples=5 00:15:47.639 lat (usec) : 250=23.89%, 500=75.54%, 750=0.36% 00:15:47.639 lat (msec) : 50=0.19% 00:15:47.639 cpu : usr=0.88%, sys=2.24%, ctx=8247, majf=0, minf=1 00:15:47.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.639 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.639 issued rwts: total=8245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.639 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=4054374: Fri Jul 12 17:24:06 2024 00:15:47.639 read: IOPS=3102, BW=12.1MiB/s (12.7MB/s)(40.0MiB/3299msec) 00:15:47.639 slat (usec): min=5, max=21199, avg=12.98, stdev=286.44 00:15:47.639 clat (usec): min=203, max=42013, avg=308.15, stdev=1076.77 00:15:47.639 lat (usec): min=210, max=42035, avg=320.44, stdev=1113.22 00:15:47.639 clat percentiles (usec): 00:15:47.639 | 1.00th=[ 223], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 249], 00:15:47.639 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:15:47.639 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 334], 00:15:47.639 | 99.00th=[ 474], 99.50th=[ 494], 99.90th=[ 635], 99.95th=[40633], 00:15:47.639 | 99.99th=[41681] 00:15:47.639 bw ( KiB/s): min=10792, max=14688, per=38.26%, avg=13105.33, stdev=1311.51, samples=6 00:15:47.639 iops : min= 2698, max= 3672, avg=3276.33, stdev=327.88, samples=6 00:15:47.639 lat (usec) : 250=21.27%, 500=78.34%, 750=0.29% 00:15:47.639 lat (msec) : 2=0.01%, 20=0.01%, 50=0.07% 00:15:47.639 cpu : usr=0.88%, sys=2.91%, ctx=10241, 
majf=0, minf=1 00:15:47.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.639 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.639 issued rwts: total=10236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.639 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4054375: Fri Jul 12 17:24:06 2024 00:15:47.639 read: IOPS=1816, BW=7265KiB/s (7439kB/s)(20.5MiB/2895msec) 00:15:47.639 slat (usec): min=5, max=15631, avg=13.38, stdev=299.14 00:15:47.639 clat (usec): min=191, max=41096, avg=532.04, stdev=2959.01 00:15:47.639 lat (usec): min=199, max=41121, avg=545.42, stdev=2974.90 00:15:47.639 clat percentiles (usec): 00:15:47.639 | 1.00th=[ 221], 5.00th=[ 265], 10.00th=[ 281], 20.00th=[ 289], 00:15:47.639 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:15:47.639 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 424], 95.00th=[ 449], 00:15:47.639 | 99.00th=[ 494], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:15:47.639 | 99.99th=[41157] 00:15:47.639 bw ( KiB/s): min= 96, max=12952, per=19.21%, avg=6580.80, stdev=5439.11, samples=5 00:15:47.639 iops : min= 24, max= 3238, avg=1645.20, stdev=1359.78, samples=5 00:15:47.639 lat (usec) : 250=3.16%, 500=95.95%, 750=0.30%, 1000=0.02% 00:15:47.639 lat (msec) : 10=0.02%, 50=0.53% 00:15:47.639 cpu : usr=0.45%, sys=1.73%, ctx=5263, majf=0, minf=1 00:15:47.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.639 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.639 issued rwts: total=5259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.639 latency : target=0, window=0, percentile=100.00%, depth=1 
00:15:47.639 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4054377: Fri Jul 12 17:24:06 2024 00:15:47.639 read: IOPS=1667, BW=6667KiB/s (6827kB/s)(17.6MiB/2709msec) 00:15:47.639 slat (nsec): min=6026, max=33896, avg=7535.74, stdev=1395.63 00:15:47.639 clat (usec): min=222, max=41994, avg=585.68, stdev=3373.01 00:15:47.639 lat (usec): min=229, max=42005, avg=593.22, stdev=3373.56 00:15:47.639 clat percentiles (usec): 00:15:47.639 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 00:15:47.639 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 297], 00:15:47.639 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 392], 95.00th=[ 441], 00:15:47.639 | 99.00th=[ 490], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:15:47.639 | 99.99th=[42206] 00:15:47.639 bw ( KiB/s): min= 96, max=11512, per=21.07%, avg=7216.00, stdev=4965.98, samples=5 00:15:47.639 iops : min= 24, max= 2878, avg=1804.00, stdev=1241.50, samples=5 00:15:47.639 lat (usec) : 250=1.46%, 500=97.63%, 750=0.20% 00:15:47.639 lat (msec) : 50=0.69% 00:15:47.639 cpu : usr=0.44%, sys=1.55%, ctx=4517, majf=0, minf=2 00:15:47.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.639 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.639 issued rwts: total=4516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.639 00:15:47.639 Run status group 0 (all jobs): 00:15:47.640 READ: bw=33.5MiB/s (35.1MB/s), 6667KiB/s-12.1MiB/s (6827kB/s-12.7MB/s), io=110MiB (116MB), run=2709-3299msec 00:15:47.640 00:15:47.640 Disk stats (read/write): 00:15:47.640 nvme0n1: ios=7465/0, merge=0/0, ticks=2723/0, in_queue=2723, util=94.02% 00:15:47.640 nvme0n2: ios=10148/0, merge=0/0, ticks=2908/0, in_queue=2908, util=94.52% 00:15:47.640 nvme0n3: ios=5219/0, 
merge=0/0, ticks=3678/0, in_queue=3678, util=97.90% 00:15:47.640 nvme0n4: ios=4555/0, merge=0/0, ticks=3463/0, in_queue=3463, util=100.00% 00:15:47.946 17:24:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:47.946 17:24:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:47.946 17:24:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:47.946 17:24:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:48.204 17:24:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:48.204 17:24:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:48.463 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:48.463 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:48.463 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:48.463 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 4054234 00:15:48.463 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:48.463 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:48.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.721 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:48.721 17:24:07 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1219 -- # local i=0 00:15:48.721 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:48.721 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:48.721 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:48.721 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:48.721 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:48.721 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:48.721 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:48.721 nvmf hotplug test: fio failed as expected 00:15:48.721 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:48.979 17:24:07 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:48.979 rmmod nvme_tcp 00:15:48.979 rmmod nvme_fabrics 00:15:48.979 rmmod nvme_keyring 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 4051181 ']' 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 4051181 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 4051181 ']' 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 4051181 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4051181 00:15:48.979 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:48.980 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:48.980 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4051181' 00:15:48.980 killing process with pid 4051181 00:15:48.980 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 4051181 00:15:48.980 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 4051181 00:15:49.238 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:49.238 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:49.238 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:15:49.238 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.238 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:49.238 17:24:07 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.238 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.238 17:24:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.140 17:24:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:51.140 00:15:51.140 real 0m26.355s 00:15:51.140 user 1m47.001s 00:15:51.140 sys 0m8.205s 00:15:51.140 17:24:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:51.140 17:24:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.140 ************************************ 00:15:51.140 END TEST nvmf_fio_target 00:15:51.140 ************************************ 00:15:51.140 17:24:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:51.140 17:24:09 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:51.140 17:24:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:51.398 17:24:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:51.398 17:24:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:51.398 ************************************ 00:15:51.398 START TEST nvmf_bdevio 00:15:51.398 ************************************ 00:15:51.398 17:24:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:51.398 * Looking for test storage... 
00:15:51.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.398 17:24:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:51.399 17:24:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:56.669 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:56.669 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:56.669 Found net devices under 0000:86:00.0: cvl_0_0 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:56.669 Found net devices under 0000:86:00.1: cvl_0_1 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.669 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:56.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:15:56.669 00:15:56.669 --- 10.0.0.2 ping statistics --- 00:15:56.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.670 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:15:56.670 00:15:56.670 --- 10.0.0.1 ping statistics --- 00:15:56.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.670 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=4058988 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 4058988 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 4058988 ']' 00:15:56.670 17:24:15 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.670 17:24:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:56.925 [2024-07-12 17:24:15.461629] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:15:56.925 [2024-07-12 17:24:15.461674] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.925 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.925 [2024-07-12 17:24:15.518962] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.925 [2024-07-12 17:24:15.599287] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.925 [2024-07-12 17:24:15.599322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.925 [2024-07-12 17:24:15.599329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.925 [2024-07-12 17:24:15.599335] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.925 [2024-07-12 17:24:15.599342] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:56.925 [2024-07-12 17:24:15.599407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:56.925 [2024-07-12 17:24:15.599513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:56.925 [2024-07-12 17:24:15.599618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.925 [2024-07-12 17:24:15.599619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:57.852 [2024-07-12 17:24:16.314222] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:57.852 Malloc0 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:15:57.852 [2024-07-12 17:24:16.357535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=()
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:15:57.852 {
00:15:57.852 "params": {
00:15:57.852 "name": "Nvme$subsystem",
00:15:57.852 "trtype": "$TEST_TRANSPORT",
00:15:57.852 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:57.852 "adrfam": "ipv4",
00:15:57.852 "trsvcid": "$NVMF_PORT",
00:15:57.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:57.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:57.852 "hdgst": ${hdgst:-false},
00:15:57.852 "ddgst": ${ddgst:-false}
00:15:57.852 },
00:15:57.852 "method": "bdev_nvme_attach_controller"
00:15:57.852 }
00:15:57.852 EOF
00:15:57.852 )")
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq .
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=,
00:15:57.852 17:24:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:15:57.852 "params": {
00:15:57.852 "name": "Nvme1",
00:15:57.852 "trtype": "tcp",
00:15:57.852 "traddr": "10.0.0.2",
00:15:57.852 "adrfam": "ipv4",
00:15:57.852 "trsvcid": "4420",
00:15:57.852 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:57.852 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:57.852 "hdgst": false,
00:15:57.852 "ddgst": false
00:15:57.852 },
00:15:57.852 "method": "bdev_nvme_attach_controller"
00:15:57.852 }'
00:15:57.852 [2024-07-12 17:24:16.407624] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:15:57.852 [2024-07-12 17:24:16.407664] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059139 ]
00:15:57.852 EAL: No free 2048 kB hugepages reported on node 1
00:15:57.852 [2024-07-12 17:24:16.460711] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:57.852 [2024-07-12 17:24:16.536337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:15:57.852 [2024-07-12 17:24:16.536440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:15:57.852 [2024-07-12 17:24:16.536443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:58.108 I/O targets:
00:15:58.108 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:15:58.108
00:15:58.108
00:15:58.108 CUnit - A unit testing framework for C - Version 2.1-3
00:15:58.108 http://cunit.sourceforge.net/
00:15:58.108
00:15:58.108
00:15:58.108 Suite: bdevio tests on: Nvme1n1
00:15:58.108 Test: blockdev write read block ...passed
00:15:58.363 Test: blockdev write zeroes read block ...passed
00:15:58.363 Test: blockdev write zeroes read no split ...passed
00:15:58.363 Test: blockdev write zeroes read split ...passed
00:15:58.363 Test: blockdev write zeroes read split partial ...passed
00:15:58.363 Test: blockdev reset ...[2024-07-12 17:24:17.010061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:58.363 [2024-07-12 17:24:17.010122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219e6d0 (9): Bad file descriptor
00:15:58.363 [2024-07-12 17:24:17.069775] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:15:58.363 passed
00:15:58.363 Test: blockdev write read 8 blocks ...passed
00:15:58.363 Test: blockdev write read size > 128k ...passed
00:15:58.363 Test: blockdev write read invalid size ...passed
00:15:58.619 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:15:58.619 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:15:58.619 Test: blockdev write read max offset ...passed
00:15:58.619 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:15:58.619 Test: blockdev writev readv 8 blocks ...passed
00:15:58.619 Test: blockdev writev readv 30 x 1block ...passed
00:15:58.619 Test: blockdev writev readv block ...passed
00:15:58.619 Test: blockdev writev readv size > 128k ...passed
00:15:58.619 Test: blockdev writev readv size > 128k in two iovs ...passed
00:15:58.619 Test: blockdev comparev and writev ...[2024-07-12 17:24:17.283359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:15:58.619 [2024-07-12 17:24:17.283391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:15:58.619 [2024-07-12 17:24:17.283405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:15:58.619 [2024-07-12 17:24:17.283412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:15:58.619 [2024-07-12 17:24:17.283651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:15:58.619 [2024-07-12 17:24:17.283661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:15:58.619 [2024-07-12 17:24:17.283672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:15:58.619 [2024-07-12 17:24:17.283683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:15:58.619 [2024-07-12 17:24:17.283932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:15:58.619 [2024-07-12 17:24:17.283942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:15:58.619 [2024-07-12 17:24:17.283953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:15:58.619 [2024-07-12 17:24:17.283960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:15:58.619 [2024-07-12 17:24:17.284207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:15:58.619 [2024-07-12 17:24:17.284216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:15:58.619 [2024-07-12 17:24:17.284227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:15:58.619 [2024-07-12 17:24:17.284234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:15:58.619 passed
00:15:58.620 Test: blockdev nvme passthru rw ...passed
00:15:58.620 Test: blockdev nvme passthru vendor specific ...[2024-07-12 17:24:17.367665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:15:58.620 [2024-07-12 17:24:17.367679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:15:58.620 [2024-07-12 17:24:17.367799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:15:58.620 [2024-07-12 17:24:17.367808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:15:58.620 [2024-07-12 17:24:17.367930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:15:58.620 [2024-07-12 17:24:17.367940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:15:58.620 [2024-07-12 17:24:17.368055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:15:58.620 [2024-07-12 17:24:17.368064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:15:58.620 passed
00:15:58.620 Test: blockdev nvme admin passthru ...passed
00:15:58.876 Test: blockdev copy ...passed
00:15:58.876
00:15:58.876 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:15:58.876               suites      1      1    n/a      0        0
00:15:58.876                tests     23     23     23      0        0
00:15:58.876              asserts    152    152    152      0      n/a
00:15:58.877
00:15:58.877 Elapsed time = 1.222 seconds
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:58.877 rmmod nvme_tcp
00:15:58.877 rmmod nvme_fabrics
00:15:58.877 rmmod nvme_keyring
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 4058988 ']'
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 4058988
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 4058988 ']'
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 4058988
00:15:58.877 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4058988
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']'
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4058988'
00:15:59.135 killing process with pid 4058988
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill
4058988
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 4058988
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:59.135 17:24:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:01.669 17:24:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:01.669
00:16:01.669 real 0m10.013s
00:16:01.669 user 0m13.077s
00:16:01.669 sys 0m4.530s
00:16:01.669 17:24:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:01.669 17:24:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:16:01.669 ************************************
00:16:01.669 END TEST nvmf_bdevio
00:16:01.669 ************************************
00:16:01.669 17:24:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:16:01.669 17:24:20 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp
00:16:01.669 17:24:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:16:01.669 17:24:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:01.669 17:24:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:01.669 ************************************
00:16:01.669 START TEST nvmf_auth_target
************************************
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp
00:16:01.669 * Looking for test storage...
00:16:01.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:01.669 17:24:20
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:01.669 17:24:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:01.669 17:24:20
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:16:01.670 17:24:20
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=()
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=()
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target --
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable
00:16:01.670 17:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=()
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=()
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=()
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=()
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=()
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=()
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=()
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:06.955 17:24:25
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:16:06.955 Found 0000:86:00.0 (0x8086 - 0x159b)
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:06.955 17:24:25
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:16:06.955 Found 0000:86:00.1 (0x8086 - 0x159b)
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 --
# pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:16:06.955 Found net devices under 0000:86:00.0: cvl_0_0
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:16:06.955 Found net devices under 0000:86:00.1: cvl_0_1
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- #
NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:06.955 17:24:25 nvmf_tcp.nvmf_auth_target --
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:16:06.956 00:16:06.956 --- 10.0.0.2 ping statistics --- 00:16:06.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.956 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:06.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:16:06.956 00:16:06.956 --- 10.0.0.1 ping statistics --- 00:16:06.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.956 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
# xtrace_disable 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4062766 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4062766 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4062766 ']' 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.956 17:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=4062899 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=834424212c7973dd04bba6d12fa79a511e90f69a681904a7 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.UBS 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 834424212c7973dd04bba6d12fa79a511e90f69a681904a7 0 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 834424212c7973dd04bba6d12fa79a511e90f69a681904a7 0 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=834424212c7973dd04bba6d12fa79a511e90f69a681904a7 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.UBS 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.UBS 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.UBS 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:07.525 17:24:26 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bc679b976e067b54516ba71a711570c54d05c7dacc96c4c5e4bcbd38b8be3291 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sv7 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bc679b976e067b54516ba71a711570c54d05c7dacc96c4c5e4bcbd38b8be3291 3 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bc679b976e067b54516ba71a711570c54d05c7dacc96c4c5e4bcbd38b8be3291 3 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bc679b976e067b54516ba71a711570c54d05c7dacc96c4c5e4bcbd38b8be3291 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:07.525 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sv7 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sv7 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.sv7 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=012458a4480f360f083cdbecd836e861 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.kXf 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 012458a4480f360f083cdbecd836e861 1 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 012458a4480f360f083cdbecd836e861 1 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=012458a4480f360f083cdbecd836e861 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.kXf 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.kXf 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.kXf 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e693e79ec0c86364f6d3525fe8441d7077e71edb20d7c01d 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.mYs 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e693e79ec0c86364f6d3525fe8441d7077e71edb20d7c01d 2 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e693e79ec0c86364f6d3525fe8441d7077e71edb20d7c01d 2 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e693e79ec0c86364f6d3525fe8441d7077e71edb20d7c01d 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.mYs 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.mYs 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.mYs 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2c220c7459f0e07d06681cfb08206d8fb75665cc8ba12996 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.RiA 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2c220c7459f0e07d06681cfb08206d8fb75665cc8ba12996 2 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2c220c7459f0e07d06681cfb08206d8fb75665cc8ba12996 2 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2c220c7459f0e07d06681cfb08206d8fb75665cc8ba12996 00:16:07.785 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.RiA 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.RiA 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.RiA 00:16:07.786 17:24:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f666870f9a561d5b83e7a7a4bfdbbc92 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.TSd 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f666870f9a561d5b83e7a7a4bfdbbc92 1 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f666870f9a561d5b83e7a7a4bfdbbc92 1 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f666870f9a561d5b83e7a7a4bfdbbc92 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:07.786 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.TSd 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.TSd 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.TSd 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=77ca6540ee42a6403cd76a156fc9a0ca5c56d8e9c1f34e136a39d25e09def73d 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.YyN 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 77ca6540ee42a6403cd76a156fc9a0ca5c56d8e9c1f34e136a39d25e09def73d 3 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 77ca6540ee42a6403cd76a156fc9a0ca5c56d8e9c1f34e136a39d25e09def73d 3 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=77ca6540ee42a6403cd76a156fc9a0ca5c56d8e9c1f34e136a39d25e09def73d 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.YyN 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.YyN 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.YyN 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 4062766 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4062766 ']' 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
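Each `gen_dhchap_key` call above draws random hex from `/dev/urandom` via `xxd` and hands it to an inline `python -` snippet (`format_key`) to wrap it as a DHHC-1 secret. A minimal standalone sketch of that wrapping, under the assumption that the secret is base64 over the ASCII key text followed by its CRC32 as four little-endian bytes — the layout the secrets later passed to `nvme connect` in this log are consistent with; the helper names here are illustrative, not SPDK's:

```python
import base64
import zlib

def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Wrap a hex key string as '<prefix>:<digest>:<base64(key || crc32)>:'."""
    # Assumed layout: CRC32 of the ASCII key, appended as 4 little-endian bytes
    crc = zlib.crc32(key.encode()).to_bytes(4, "little")
    b64 = base64.b64encode(key.encode() + crc).decode()
    return "{}:{:02x}:{}:".format(prefix, digest, b64)

def parse_dhchap_secret(secret: str) -> tuple[int, str]:
    """Invert format_dhchap_key, verifying the trailing checksum."""
    _prefix, digest, b64, _ = secret.split(":")
    raw = base64.b64decode(b64)
    key, crc = raw[:-4], raw[-4:]
    if int.from_bytes(crc, "little") != zlib.crc32(key):
        raise ValueError("DHCHAP secret failed CRC check")
    return int(digest, 16), key.decode()

# keys[0] as generated above: 48 hex chars, digest id 0 (null)
secret = format_dhchap_key(
    "834424212c7973dd04bba6d12fa79a511e90f69a681904a7", 0)
print(secret)
```

Round-tripping through `parse_dhchap_secret` recovers the digest id and hex key, and rejects a secret whose checksum tail was corrupted in transit.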
00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 4062899 /var/tmp/host.sock 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4062899 ']' 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:08.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.045 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.305 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.305 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:08.305 17:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:08.305 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.305 17:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.305 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.305 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:08.305 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UBS 00:16:08.305 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.305 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.305 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.305 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.UBS 00:16:08.305 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.UBS 00:16:08.564 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.sv7 ]] 00:16:08.564 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sv7 00:16:08.564 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.564 17:24:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.564 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.564 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sv7 00:16:08.564 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sv7 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kXf 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kXf 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kXf 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.mYs ]] 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mYs 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mYs 00:16:08.823 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mYs 00:16:09.083 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:09.083 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.RiA 00:16:09.083 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.083 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.083 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.083 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.RiA 00:16:09.083 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.RiA 00:16:09.342 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.TSd ]] 00:16:09.342 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TSd 00:16:09.342 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.342 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.342 17:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.342 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TSd 00:16:09.342 17:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.TSd 00:16:09.342 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:09.342 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.YyN 00:16:09.342 17:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.342 17:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.342 17:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.342 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.YyN 00:16:09.342 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.YyN 00:16:09.601 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:09.601 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:09.601 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.601 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.601 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.601 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.861 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:09.861 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.861 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.861 17:24:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:09.861 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:09.861 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:09.861 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:09.861 17:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:09.861 17:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.861 17:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:09.861 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:09.861 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:10.119
00:16:10.119 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:10.119 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:10.119 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:10.119 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:10.119 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:10.119 17:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:10.120 17:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.120 17:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:10.120 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:10.120 {
00:16:10.120 "cntlid": 1,
00:16:10.120 "qid": 0,
00:16:10.120 "state": "enabled",
00:16:10.120 "thread": "nvmf_tgt_poll_group_000",
00:16:10.120 "listen_address": {
00:16:10.120 "trtype": "TCP",
00:16:10.120 "adrfam": "IPv4",
00:16:10.120 "traddr": "10.0.0.2",
00:16:10.120 "trsvcid": "4420"
00:16:10.120 },
00:16:10.120 "peer_address": {
00:16:10.120 "trtype": "TCP",
00:16:10.120 "adrfam": "IPv4",
00:16:10.120 "traddr": "10.0.0.1",
00:16:10.120 "trsvcid": "38386"
00:16:10.120 },
00:16:10.120 "auth": {
00:16:10.120 "state": "completed",
00:16:10.120 "digest": "sha256",
00:16:10.120 "dhgroup": "null"
00:16:10.120 }
00:16:10.120 }
00:16:10.120 ]'
00:16:10.120 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:10.120 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:10.120 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:10.378 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:10.378 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:10.378 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:10.378 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:10.378 17:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:10.378 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=:
00:16:10.945 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:10.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:10.945 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:10.945 17:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:10.945 17:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.945 17:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:10.945 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:10.945 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:10.945 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:11.204 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1
00:16:11.204 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:11.204 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:11.204 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:11.204 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:11.204 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:11.204 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:11.204 17:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:11.204 17:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.204 17:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:11.204 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:11.204 17:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:11.509
00:16:11.509 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:11.509 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:11.509 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:11.795 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:11.796 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:11.796 17:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:11.796 17:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.796 17:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:11.796 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:11.796 {
00:16:11.796 "cntlid": 3,
00:16:11.796 "qid": 0,
00:16:11.796 "state": "enabled",
00:16:11.796 "thread": "nvmf_tgt_poll_group_000",
00:16:11.796 "listen_address": {
00:16:11.796 "trtype": "TCP",
00:16:11.796 "adrfam": "IPv4",
00:16:11.796 "traddr": "10.0.0.2",
00:16:11.796 "trsvcid": "4420"
00:16:11.796 },
00:16:11.796 "peer_address": {
00:16:11.796 "trtype": "TCP",
00:16:11.796 "adrfam": "IPv4",
00:16:11.796 "traddr": "10.0.0.1",
00:16:11.796 "trsvcid": "38412"
00:16:11.796 },
00:16:11.796 "auth": {
00:16:11.796 "state": "completed",
00:16:11.796 "digest": "sha256",
00:16:11.796 "dhgroup": "null"
00:16:11.796 }
00:16:11.796 }
00:16:11.796 ]'
00:16:11.796 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:11.796 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:11.796 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:11.796 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:11.796 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:11.796 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:11.796 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:11.796 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:12.054 17:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==:
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:12.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:12.621 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:12.879
00:16:12.879 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:12.879 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:12.879 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:13.137 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:13.137 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:13.137 17:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:13.137 17:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.137 17:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:13.137 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:13.137 {
00:16:13.137 "cntlid": 5,
00:16:13.137 "qid": 0,
00:16:13.137 "state": "enabled",
00:16:13.137 "thread": "nvmf_tgt_poll_group_000",
00:16:13.137 "listen_address": {
00:16:13.137 "trtype": "TCP",
00:16:13.137 "adrfam": "IPv4",
00:16:13.137 "traddr": "10.0.0.2",
00:16:13.137 "trsvcid": "4420"
00:16:13.137 },
00:16:13.137 "peer_address": {
00:16:13.137 "trtype": "TCP",
00:16:13.137 "adrfam": "IPv4",
00:16:13.138 "traddr": "10.0.0.1",
00:16:13.138 "trsvcid": "38440"
00:16:13.138 },
00:16:13.138 "auth": {
00:16:13.138 "state": "completed",
00:16:13.138 "digest": "sha256",
00:16:13.138 "dhgroup": "null"
00:16:13.138 }
00:16:13.138 }
00:16:13.138 ]'
00:16:13.138 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:13.138 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:13.138 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:13.138 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:13.138 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:13.138 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:13.138 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:13.138 17:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:13.397 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6:
00:16:13.963 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:13.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:13.963 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:13.963 17:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:13.963 17:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.963 17:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:13.963 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:13.963 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:13.963 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:14.223 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3
00:16:14.223 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:14.223 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:14.223 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:14.223 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:14.223 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:14.223 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:14.223 17:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:14.223 17:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.223 17:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:14.223 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:14.223 17:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:14.481
00:16:14.481 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:14.481 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:14.481 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:14.739 {
00:16:14.739 "cntlid": 7,
00:16:14.739 "qid": 0,
00:16:14.739 "state": "enabled",
00:16:14.739 "thread": "nvmf_tgt_poll_group_000",
00:16:14.739 "listen_address": {
00:16:14.739 "trtype": "TCP",
00:16:14.739 "adrfam": "IPv4",
00:16:14.739 "traddr": "10.0.0.2",
00:16:14.739 "trsvcid": "4420"
00:16:14.739 },
00:16:14.739 "peer_address": {
00:16:14.739 "trtype": "TCP",
00:16:14.739 "adrfam": "IPv4",
00:16:14.739 "traddr": "10.0.0.1",
00:16:14.739 "trsvcid": "38454"
00:16:14.739 },
00:16:14.739 "auth": {
00:16:14.739 "state": "completed",
00:16:14.739 "digest": "sha256",
00:16:14.739 "dhgroup": "null"
00:16:14.739 }
00:16:14.739 }
00:16:14.739 ]'
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:14.739 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:14.997 17:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=:
00:16:15.562 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:15.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:15.562 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:15.562 17:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:15.562 17:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.562 17:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:15.562 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:15.562 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:15.562 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:15.562 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:15.821 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0
00:16:15.821 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:15.821 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:15.821 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:15.821 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:15.821 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:15.821 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:15.821 17:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:15.821 17:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.821 17:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:15.821 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:15.821 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:15.821
00:16:16.080 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:16.080 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:16.080 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:16.080 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:16.080 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:16.080 17:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:16.080 17:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.080 17:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:16.080 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:16.080 {
00:16:16.080 "cntlid": 9,
00:16:16.080 "qid": 0,
00:16:16.080 "state": "enabled",
00:16:16.080 "thread": "nvmf_tgt_poll_group_000",
00:16:16.080 "listen_address": {
00:16:16.080 "trtype": "TCP",
00:16:16.080 "adrfam": "IPv4",
00:16:16.080 "traddr": "10.0.0.2",
00:16:16.080 "trsvcid": "4420"
00:16:16.080 },
00:16:16.080 "peer_address": {
00:16:16.080 "trtype": "TCP",
00:16:16.080 "adrfam": "IPv4",
00:16:16.080 "traddr": "10.0.0.1",
00:16:16.080 "trsvcid": "38466"
00:16:16.080 },
00:16:16.080 "auth": {
00:16:16.080 "state": "completed",
00:16:16.080 "digest": "sha256",
00:16:16.080 "dhgroup": "ffdhe2048"
00:16:16.080 }
00:16:16.080 }
00:16:16.080 ]'
00:16:16.080 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:16.080 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:16.080 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:16.339 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:16.339 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:16.339 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:16.339 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:16.339 17:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:16.339 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=:
00:16:16.906 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:16.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:16.906 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:16.906 17:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:16.906 17:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.906 17:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:16.906 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:16.906 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:16.906 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:17.165 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1
00:16:17.165 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:17.165 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:17.165 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:17.165 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:17.165 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:17.165 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:17.165 17:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:17.165 17:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.165 17:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:17.165 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:17.165 17:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:17.424
00:16:17.424 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:17.424 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:17.424 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:17.683 {
00:16:17.683 "cntlid": 11,
00:16:17.683 "qid": 0,
00:16:17.683 "state": "enabled",
00:16:17.683 "thread": "nvmf_tgt_poll_group_000",
00:16:17.683 "listen_address": {
00:16:17.683 "trtype": "TCP",
00:16:17.683 "adrfam": "IPv4",
00:16:17.683 "traddr": "10.0.0.2",
00:16:17.683 "trsvcid": "4420"
00:16:17.683 },
00:16:17.683 "peer_address": {
00:16:17.683 "trtype": "TCP",
00:16:17.683 "adrfam": "IPv4",
00:16:17.683 "traddr": "10.0.0.1",
00:16:17.683 "trsvcid": "38500"
00:16:17.683 },
00:16:17.683 "auth": {
00:16:17.683 "state": "completed",
00:16:17.683 "digest": "sha256",
00:16:17.683 "dhgroup": "ffdhe2048"
00:16:17.683 }
00:16:17.683 }
00:16:17.683 ]'
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:17.683 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:17.942 17:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==:
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:18.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:18.508 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:18.766
00:16:18.766 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:18.766 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:18.766 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:19.024 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:19.024 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:19.024 17:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.024 17:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.024 17:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.024 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:19.024 {
00:16:19.024 "cntlid": 13,
00:16:19.024 "qid": 0,
00:16:19.024 "state": "enabled",
00:16:19.024 "thread": "nvmf_tgt_poll_group_000",
00:16:19.024 "listen_address": {
00:16:19.024 "trtype": "TCP",
00:16:19.024 "adrfam": "IPv4",
00:16:19.024 "traddr": "10.0.0.2",
00:16:19.024 "trsvcid": "4420"
00:16:19.024 },
00:16:19.024 "peer_address": {
00:16:19.024 "trtype": "TCP",
00:16:19.024 "adrfam": "IPv4",
00:16:19.024 "traddr": "10.0.0.1",
00:16:19.024 "trsvcid": "37022"
00:16:19.024 },
00:16:19.024 "auth": {
00:16:19.024 "state": "completed",
00:16:19.024 "digest": "sha256",
00:16:19.024 "dhgroup": "ffdhe2048"
00:16:19.024 }
00:16:19.024 }
00:16:19.024 ]'
00:16:19.024 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:19.024 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:19.024 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:19.024 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:19.024 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:19.282 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:19.283 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:19.283 17:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:19.283 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6:
00:16:19.849 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:19.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:19.849 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:19.849 17:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.849 17:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.849 17:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.849 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:19.849 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:16:19.849 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.108 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:20.108 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.108 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:20.108 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:20.108 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:20.108 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.108 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:20.108 17:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.108 17:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.108 17:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.108 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.108 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.366 
00:16:20.366 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.366 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.366 17:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.623 17:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.623 17:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.623 17:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.623 17:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.623 17:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.623 17:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.623 { 00:16:20.623 "cntlid": 15, 00:16:20.623 "qid": 0, 00:16:20.623 "state": "enabled", 00:16:20.623 "thread": "nvmf_tgt_poll_group_000", 00:16:20.623 "listen_address": { 00:16:20.623 "trtype": "TCP", 00:16:20.623 "adrfam": "IPv4", 00:16:20.623 "traddr": "10.0.0.2", 00:16:20.623 "trsvcid": "4420" 00:16:20.623 }, 00:16:20.623 "peer_address": { 00:16:20.623 "trtype": "TCP", 00:16:20.623 "adrfam": "IPv4", 00:16:20.623 "traddr": "10.0.0.1", 00:16:20.623 "trsvcid": "37046" 00:16:20.623 }, 00:16:20.623 "auth": { 00:16:20.623 "state": "completed", 00:16:20.623 "digest": "sha256", 00:16:20.623 "dhgroup": "ffdhe2048" 00:16:20.623 } 00:16:20.623 } 00:16:20.623 ]' 00:16:20.623 17:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.623 17:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.623 17:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.623 17:24:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.623 17:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.623 17:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.623 17:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.623 17:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.881 17:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:16:21.446 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.446 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.446 17:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.446 17:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.446 17:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.446 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.446 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.447 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.447 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.705 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:21.705 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.705 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:21.705 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:21.705 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:21.705 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.705 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.705 17:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.705 17:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.705 17:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.705 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.705 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.964 00:16:21.964 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.964 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.964 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.964 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.964 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.964 17:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.964 17:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.964 17:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.964 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.964 { 00:16:21.964 "cntlid": 17, 00:16:21.964 "qid": 0, 00:16:21.964 "state": "enabled", 00:16:21.964 "thread": "nvmf_tgt_poll_group_000", 00:16:21.964 "listen_address": { 00:16:21.964 "trtype": "TCP", 00:16:21.964 "adrfam": "IPv4", 00:16:21.964 "traddr": "10.0.0.2", 00:16:21.964 "trsvcid": "4420" 00:16:21.964 }, 00:16:21.964 "peer_address": { 00:16:21.964 "trtype": "TCP", 00:16:21.964 "adrfam": "IPv4", 00:16:21.964 "traddr": "10.0.0.1", 00:16:21.964 "trsvcid": "37080" 00:16:21.964 }, 00:16:21.964 "auth": { 00:16:21.964 "state": "completed", 00:16:21.964 "digest": "sha256", 00:16:21.964 "dhgroup": "ffdhe3072" 00:16:21.964 } 00:16:21.964 } 00:16:21.964 ]' 00:16:21.964 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.964 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:16:21.964 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.223 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.223 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.223 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.223 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.223 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.223 17:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:16:22.789 17:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.048 17:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.306 00:16:23.306 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.306 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.306 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.564 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.564 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.564 17:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.564 17:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.564 17:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.564 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.564 { 00:16:23.564 "cntlid": 19, 00:16:23.564 "qid": 0, 00:16:23.564 "state": "enabled", 00:16:23.564 "thread": "nvmf_tgt_poll_group_000", 00:16:23.564 "listen_address": { 00:16:23.564 "trtype": "TCP", 00:16:23.564 "adrfam": "IPv4", 00:16:23.564 "traddr": "10.0.0.2", 00:16:23.564 "trsvcid": "4420" 00:16:23.564 }, 00:16:23.564 "peer_address": { 00:16:23.564 "trtype": "TCP", 00:16:23.564 "adrfam": "IPv4", 00:16:23.564 "traddr": "10.0.0.1", 00:16:23.564 "trsvcid": "37120" 00:16:23.564 }, 00:16:23.564 "auth": { 00:16:23.564 "state": "completed", 00:16:23.564 "digest": "sha256", 00:16:23.564 "dhgroup": "ffdhe3072" 00:16:23.564 } 00:16:23.564 } 00:16:23.564 ]' 00:16:23.564 
17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.564 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.564 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.564 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.565 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.565 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.565 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.565 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.823 17:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:16:24.389 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.389 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.389 17:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.389 17:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.389 17:24:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.389 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.389 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:24.389 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:24.647 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:24.647 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.647 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:24.647 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:24.647 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:24.647 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.647 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.647 17:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.647 17:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.647 17:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.647 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:24.647 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.904 00:16:24.904 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.904 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.904 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.162 { 00:16:25.162 "cntlid": 21, 00:16:25.162 "qid": 0, 00:16:25.162 "state": "enabled", 00:16:25.162 "thread": "nvmf_tgt_poll_group_000", 00:16:25.162 "listen_address": { 00:16:25.162 "trtype": "TCP", 00:16:25.162 "adrfam": "IPv4", 00:16:25.162 "traddr": "10.0.0.2", 00:16:25.162 "trsvcid": "4420" 00:16:25.162 }, 00:16:25.162 "peer_address": { 00:16:25.162 "trtype": "TCP", 00:16:25.162 "adrfam": "IPv4", 00:16:25.162 "traddr": "10.0.0.1", 00:16:25.162 "trsvcid": "37148" 00:16:25.162 }, 00:16:25.162 "auth": { 00:16:25.162 "state": "completed", 00:16:25.162 "digest": 
"sha256", 00:16:25.162 "dhgroup": "ffdhe3072" 00:16:25.162 } 00:16:25.162 } 00:16:25.162 ]' 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.162 17:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.420 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:16:25.986 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.986 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.986 17:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.986 17:24:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.986 17:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.986 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.986 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.986 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.986 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:25.986 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.986 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:26.245 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:26.245 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:26.245 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.245 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:26.245 17:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.245 17:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.245 17:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.245 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.245 17:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.245 00:16:26.245 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.245 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.504 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.504 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.504 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.504 17:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.504 17:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.504 17:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.504 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.504 { 00:16:26.504 "cntlid": 23, 00:16:26.504 "qid": 0, 00:16:26.504 "state": "enabled", 00:16:26.504 "thread": "nvmf_tgt_poll_group_000", 00:16:26.504 "listen_address": { 00:16:26.504 "trtype": "TCP", 00:16:26.504 "adrfam": "IPv4", 00:16:26.504 "traddr": "10.0.0.2", 00:16:26.504 "trsvcid": "4420" 00:16:26.504 }, 00:16:26.504 "peer_address": { 00:16:26.504 "trtype": "TCP", 00:16:26.504 "adrfam": "IPv4", 00:16:26.504 "traddr": "10.0.0.1", 00:16:26.504 "trsvcid": "37172" 00:16:26.504 }, 00:16:26.504 "auth": 
{ 00:16:26.504 "state": "completed", 00:16:26.504 "digest": "sha256", 00:16:26.504 "dhgroup": "ffdhe3072" 00:16:26.504 } 00:16:26.504 } 00:16:26.504 ]' 00:16:26.504 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.504 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.504 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.762 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.762 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.762 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.762 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.762 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.762 17:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:16:27.328 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.328 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.328 17:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.328 17:24:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.328 17:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.328 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.328 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.328 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:27.328 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:27.587 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:27.587 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.587 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.587 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:27.587 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:27.587 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.587 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.587 17:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.587 17:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.587 17:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.587 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.587 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.845 00:16:27.845 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.845 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.845 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.117 { 00:16:28.117 "cntlid": 25, 00:16:28.117 "qid": 0, 00:16:28.117 "state": "enabled", 00:16:28.117 "thread": "nvmf_tgt_poll_group_000", 00:16:28.117 "listen_address": { 00:16:28.117 "trtype": "TCP", 00:16:28.117 "adrfam": "IPv4", 00:16:28.117 "traddr": "10.0.0.2", 00:16:28.117 "trsvcid": "4420" 00:16:28.117 }, 00:16:28.117 "peer_address": { 00:16:28.117 "trtype": "TCP", 
00:16:28.117 "adrfam": "IPv4", 00:16:28.117 "traddr": "10.0.0.1", 00:16:28.117 "trsvcid": "50142" 00:16:28.117 }, 00:16:28.117 "auth": { 00:16:28.117 "state": "completed", 00:16:28.117 "digest": "sha256", 00:16:28.117 "dhgroup": "ffdhe4096" 00:16:28.117 } 00:16:28.117 } 00:16:28.117 ]' 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.117 17:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.383 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:16:28.945 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.945 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.945 17:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.945 17:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.945 17:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.945 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.945 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:28.945 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.202 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:29.202 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.202 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.202 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:29.202 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:29.202 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.202 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.202 17:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.202 17:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.202 17:24:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.202 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.202 17:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.459 00:16:29.459 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.459 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.459 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.716 { 00:16:29.716 "cntlid": 27, 00:16:29.716 "qid": 0, 00:16:29.716 "state": "enabled", 00:16:29.716 "thread": "nvmf_tgt_poll_group_000", 00:16:29.716 "listen_address": { 00:16:29.716 "trtype": "TCP", 00:16:29.716 "adrfam": 
"IPv4", 00:16:29.716 "traddr": "10.0.0.2", 00:16:29.716 "trsvcid": "4420" 00:16:29.716 }, 00:16:29.716 "peer_address": { 00:16:29.716 "trtype": "TCP", 00:16:29.716 "adrfam": "IPv4", 00:16:29.716 "traddr": "10.0.0.1", 00:16:29.716 "trsvcid": "50172" 00:16:29.716 }, 00:16:29.716 "auth": { 00:16:29.716 "state": "completed", 00:16:29.716 "digest": "sha256", 00:16:29.716 "dhgroup": "ffdhe4096" 00:16:29.716 } 00:16:29.716 } 00:16:29.716 ]' 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.716 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.974 17:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:16:30.539 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:30.539 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.539 17:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.539 17:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.539 17:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.539 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.539 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:30.539 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:30.796 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:30.796 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.796 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:30.796 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:30.796 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:30.796 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.796 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.796 17:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.796 17:24:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.796 17:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.796 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.796 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.054 00:16:31.054 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.054 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.054 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.312 { 00:16:31.312 "cntlid": 29, 00:16:31.312 "qid": 0, 00:16:31.312 "state": "enabled", 00:16:31.312 "thread": 
"nvmf_tgt_poll_group_000", 00:16:31.312 "listen_address": { 00:16:31.312 "trtype": "TCP", 00:16:31.312 "adrfam": "IPv4", 00:16:31.312 "traddr": "10.0.0.2", 00:16:31.312 "trsvcid": "4420" 00:16:31.312 }, 00:16:31.312 "peer_address": { 00:16:31.312 "trtype": "TCP", 00:16:31.312 "adrfam": "IPv4", 00:16:31.312 "traddr": "10.0.0.1", 00:16:31.312 "trsvcid": "50198" 00:16:31.312 }, 00:16:31.312 "auth": { 00:16:31.312 "state": "completed", 00:16:31.312 "digest": "sha256", 00:16:31.312 "dhgroup": "ffdhe4096" 00:16:31.312 } 00:16:31.312 } 00:16:31.312 ]' 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.312 17:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.569 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:16:32.134 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.134 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.134 17:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.134 17:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.134 17:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.134 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.134 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:32.134 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:32.392 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:32.392 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.392 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.392 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:32.392 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:32.392 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.392 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:32.392 17:24:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.392 17:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.392 17:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.393 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.393 17:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.650 00:16:32.650 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.650 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.650 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.651 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.651 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.651 17:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.651 17:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.651 17:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.651 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.651 { 00:16:32.651 "cntlid": 31, 00:16:32.651 "qid": 0, 00:16:32.651 "state": "enabled", 00:16:32.651 "thread": 
"nvmf_tgt_poll_group_000", 00:16:32.651 "listen_address": { 00:16:32.651 "trtype": "TCP", 00:16:32.651 "adrfam": "IPv4", 00:16:32.651 "traddr": "10.0.0.2", 00:16:32.651 "trsvcid": "4420" 00:16:32.651 }, 00:16:32.651 "peer_address": { 00:16:32.651 "trtype": "TCP", 00:16:32.651 "adrfam": "IPv4", 00:16:32.651 "traddr": "10.0.0.1", 00:16:32.651 "trsvcid": "50210" 00:16:32.651 }, 00:16:32.651 "auth": { 00:16:32.651 "state": "completed", 00:16:32.651 "digest": "sha256", 00:16:32.651 "dhgroup": "ffdhe4096" 00:16:32.651 } 00:16:32.651 } 00:16:32.651 ]' 00:16:32.651 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.651 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.651 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.908 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:32.908 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.908 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.908 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.908 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.908 17:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:16:33.474 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.732 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.732 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.732 17:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.732 17:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.732 17:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.732 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.732 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.733 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.298 00:16:34.298 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.298 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.298 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.298 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.298 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.298 17:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.298 17:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.298 17:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.298 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:16:34.298 { 00:16:34.298 "cntlid": 33, 00:16:34.298 "qid": 0, 00:16:34.298 "state": "enabled", 00:16:34.298 "thread": "nvmf_tgt_poll_group_000", 00:16:34.298 "listen_address": { 00:16:34.298 "trtype": "TCP", 00:16:34.298 "adrfam": "IPv4", 00:16:34.298 "traddr": "10.0.0.2", 00:16:34.298 "trsvcid": "4420" 00:16:34.298 }, 00:16:34.298 "peer_address": { 00:16:34.298 "trtype": "TCP", 00:16:34.298 "adrfam": "IPv4", 00:16:34.298 "traddr": "10.0.0.1", 00:16:34.298 "trsvcid": "50236" 00:16:34.298 }, 00:16:34.298 "auth": { 00:16:34.298 "state": "completed", 00:16:34.298 "digest": "sha256", 00:16:34.298 "dhgroup": "ffdhe6144" 00:16:34.298 } 00:16:34.298 } 00:16:34.298 ]' 00:16:34.298 17:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.298 17:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.298 17:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.298 17:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:34.298 17:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.556 17:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.556 17:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.556 17:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.556 17:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret 
DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:16:35.122 17:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.122 17:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.122 17:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.122 17:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.122 17:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.122 17:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.122 17:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.122 17:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.380 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:35.380 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.380 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.380 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:35.380 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:35.380 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.380 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.380 17:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.380 17:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.380 17:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.380 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.380 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.638 00:16:35.638 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.638 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.638 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.895 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.895 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.895 17:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.895 17:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.895 17:24:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.895 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.895 { 00:16:35.895 "cntlid": 35, 00:16:35.895 "qid": 0, 00:16:35.895 "state": "enabled", 00:16:35.895 "thread": "nvmf_tgt_poll_group_000", 00:16:35.895 "listen_address": { 00:16:35.895 "trtype": "TCP", 00:16:35.895 "adrfam": "IPv4", 00:16:35.895 "traddr": "10.0.0.2", 00:16:35.895 "trsvcid": "4420" 00:16:35.895 }, 00:16:35.895 "peer_address": { 00:16:35.895 "trtype": "TCP", 00:16:35.895 "adrfam": "IPv4", 00:16:35.895 "traddr": "10.0.0.1", 00:16:35.895 "trsvcid": "50258" 00:16:35.895 }, 00:16:35.895 "auth": { 00:16:35.895 "state": "completed", 00:16:35.895 "digest": "sha256", 00:16:35.895 "dhgroup": "ffdhe6144" 00:16:35.895 } 00:16:35.895 } 00:16:35.895 ]' 00:16:35.895 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.895 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.895 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.895 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.895 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.153 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.153 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.153 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.153 17:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:16:36.720 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.720 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.720 17:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.720 17:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.720 17:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.720 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.720 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.720 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.978 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:36.978 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.978 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:36.978 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:36.978 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:36.978 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:36.978 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.978 17:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.978 17:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.978 17:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.978 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.979 17:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.237 00:16:37.237 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.237 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.237 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.495 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.495 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.495 17:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.495 17:24:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.495 17:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.495 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.495 { 00:16:37.495 "cntlid": 37, 00:16:37.495 "qid": 0, 00:16:37.495 "state": "enabled", 00:16:37.495 "thread": "nvmf_tgt_poll_group_000", 00:16:37.495 "listen_address": { 00:16:37.495 "trtype": "TCP", 00:16:37.495 "adrfam": "IPv4", 00:16:37.495 "traddr": "10.0.0.2", 00:16:37.495 "trsvcid": "4420" 00:16:37.495 }, 00:16:37.495 "peer_address": { 00:16:37.495 "trtype": "TCP", 00:16:37.495 "adrfam": "IPv4", 00:16:37.495 "traddr": "10.0.0.1", 00:16:37.495 "trsvcid": "50276" 00:16:37.495 }, 00:16:37.495 "auth": { 00:16:37.495 "state": "completed", 00:16:37.495 "digest": "sha256", 00:16:37.495 "dhgroup": "ffdhe6144" 00:16:37.495 } 00:16:37.495 } 00:16:37.495 ]' 00:16:37.495 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.495 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.495 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.754 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.754 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.754 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.754 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.754 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.754 17:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:16:38.320 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.320 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.320 17:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.320 17:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.320 17:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.320 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.320 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:38.320 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:38.579 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:38.579 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.579 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.579 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:38.579 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:38.579 17:24:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.579 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:38.579 17:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.579 17:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.579 17:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.579 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.579 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.837 00:16:39.095 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.095 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.095 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.095 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.095 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.095 17:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.095 17:24:57 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.095 17:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.095 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.095 { 00:16:39.095 "cntlid": 39, 00:16:39.095 "qid": 0, 00:16:39.095 "state": "enabled", 00:16:39.095 "thread": "nvmf_tgt_poll_group_000", 00:16:39.095 "listen_address": { 00:16:39.096 "trtype": "TCP", 00:16:39.096 "adrfam": "IPv4", 00:16:39.096 "traddr": "10.0.0.2", 00:16:39.096 "trsvcid": "4420" 00:16:39.096 }, 00:16:39.096 "peer_address": { 00:16:39.096 "trtype": "TCP", 00:16:39.096 "adrfam": "IPv4", 00:16:39.096 "traddr": "10.0.0.1", 00:16:39.096 "trsvcid": "59978" 00:16:39.096 }, 00:16:39.096 "auth": { 00:16:39.096 "state": "completed", 00:16:39.096 "digest": "sha256", 00:16:39.096 "dhgroup": "ffdhe6144" 00:16:39.096 } 00:16:39.096 } 00:16:39.096 ]' 00:16:39.096 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.096 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.096 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.354 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:39.354 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.354 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.354 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.354 17:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.354 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:16:39.921 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.921 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.921 17:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.921 17:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.921 17:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.921 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.921 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.921 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.921 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:40.179 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:40.179 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.179 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.179 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:40.179 17:24:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:16:40.179 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.179 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.179 17:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.179 17:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.179 17:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.179 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.179 17:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.746 00:16:40.746 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.746 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.746 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.746 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.004 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:41.004 17:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.004 17:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.004 17:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.004 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.004 { 00:16:41.005 "cntlid": 41, 00:16:41.005 "qid": 0, 00:16:41.005 "state": "enabled", 00:16:41.005 "thread": "nvmf_tgt_poll_group_000", 00:16:41.005 "listen_address": { 00:16:41.005 "trtype": "TCP", 00:16:41.005 "adrfam": "IPv4", 00:16:41.005 "traddr": "10.0.0.2", 00:16:41.005 "trsvcid": "4420" 00:16:41.005 }, 00:16:41.005 "peer_address": { 00:16:41.005 "trtype": "TCP", 00:16:41.005 "adrfam": "IPv4", 00:16:41.005 "traddr": "10.0.0.1", 00:16:41.005 "trsvcid": "59988" 00:16:41.005 }, 00:16:41.005 "auth": { 00:16:41.005 "state": "completed", 00:16:41.005 "digest": "sha256", 00:16:41.005 "dhgroup": "ffdhe8192" 00:16:41.005 } 00:16:41.005 } 00:16:41.005 ]' 00:16:41.005 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.005 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.005 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.005 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.005 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.005 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.005 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.005 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:41.263 17:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.830 17:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.396 00:16:42.396 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.396 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.396 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.655 { 00:16:42.655 "cntlid": 43, 00:16:42.655 "qid": 0, 00:16:42.655 "state": "enabled", 00:16:42.655 "thread": "nvmf_tgt_poll_group_000", 00:16:42.655 "listen_address": { 00:16:42.655 "trtype": "TCP", 00:16:42.655 "adrfam": "IPv4", 00:16:42.655 "traddr": "10.0.0.2", 00:16:42.655 "trsvcid": "4420" 00:16:42.655 }, 00:16:42.655 "peer_address": { 00:16:42.655 "trtype": "TCP", 00:16:42.655 "adrfam": "IPv4", 00:16:42.655 "traddr": "10.0.0.1", 00:16:42.655 "trsvcid": "60008" 00:16:42.655 }, 00:16:42.655 "auth": { 00:16:42.655 "state": "completed", 00:16:42.655 "digest": "sha256", 00:16:42.655 "dhgroup": "ffdhe8192" 00:16:42.655 } 00:16:42.655 } 00:16:42.655 ]' 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.655 17:25:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.913 17:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:16:43.480 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.480 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.480 17:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.480 17:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.480 17:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.480 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.480 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.480 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.738 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:43.738 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:16:43.738 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.738 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:43.738 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:43.738 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.738 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.738 17:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.738 17:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.738 17:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.738 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.738 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.304 00:16:44.304 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.304 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.304 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:16:44.304 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.304 17:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.304 17:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.304 17:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.304 17:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.304 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.304 { 00:16:44.304 "cntlid": 45, 00:16:44.304 "qid": 0, 00:16:44.304 "state": "enabled", 00:16:44.304 "thread": "nvmf_tgt_poll_group_000", 00:16:44.304 "listen_address": { 00:16:44.304 "trtype": "TCP", 00:16:44.304 "adrfam": "IPv4", 00:16:44.304 "traddr": "10.0.0.2", 00:16:44.304 "trsvcid": "4420" 00:16:44.304 }, 00:16:44.304 "peer_address": { 00:16:44.304 "trtype": "TCP", 00:16:44.304 "adrfam": "IPv4", 00:16:44.304 "traddr": "10.0.0.1", 00:16:44.304 "trsvcid": "60026" 00:16:44.304 }, 00:16:44.304 "auth": { 00:16:44.304 "state": "completed", 00:16:44.304 "digest": "sha256", 00:16:44.304 "dhgroup": "ffdhe8192" 00:16:44.304 } 00:16:44.304 } 00:16:44.304 ]' 00:16:44.304 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.304 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.304 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.562 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.562 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.562 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.562 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:44.562 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.562 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:16:45.158 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.158 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.158 17:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.158 17:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.158 17:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.158 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.158 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:45.159 17:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:45.417 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:45.417 17:25:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.417 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:45.417 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:45.417 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:45.417 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.417 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:45.417 17:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.417 17:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.417 17:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.417 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.417 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.984 00:16:45.984 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.984 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.984 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:45.984 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.984 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.984 17:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.984 17:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.243 17:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.243 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.243 { 00:16:46.243 "cntlid": 47, 00:16:46.243 "qid": 0, 00:16:46.243 "state": "enabled", 00:16:46.243 "thread": "nvmf_tgt_poll_group_000", 00:16:46.243 "listen_address": { 00:16:46.243 "trtype": "TCP", 00:16:46.243 "adrfam": "IPv4", 00:16:46.243 "traddr": "10.0.0.2", 00:16:46.243 "trsvcid": "4420" 00:16:46.243 }, 00:16:46.243 "peer_address": { 00:16:46.243 "trtype": "TCP", 00:16:46.243 "adrfam": "IPv4", 00:16:46.243 "traddr": "10.0.0.1", 00:16:46.243 "trsvcid": "60054" 00:16:46.243 }, 00:16:46.243 "auth": { 00:16:46.243 "state": "completed", 00:16:46.243 "digest": "sha256", 00:16:46.243 "dhgroup": "ffdhe8192" 00:16:46.243 } 00:16:46.243 } 00:16:46.243 ]' 00:16:46.243 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.243 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.243 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.243 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.243 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.243 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.243 17:25:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.243 17:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.501 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.068 17:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.326 00:16:47.326 17:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.326 17:25:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.326 17:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.585 17:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.585 17:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.585 17:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.585 17:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.585 17:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.585 17:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.585 { 00:16:47.585 "cntlid": 49, 00:16:47.585 "qid": 0, 00:16:47.585 "state": "enabled", 00:16:47.585 "thread": "nvmf_tgt_poll_group_000", 00:16:47.585 "listen_address": { 00:16:47.585 "trtype": "TCP", 00:16:47.585 "adrfam": "IPv4", 00:16:47.585 "traddr": "10.0.0.2", 00:16:47.585 "trsvcid": "4420" 00:16:47.585 }, 00:16:47.585 "peer_address": { 00:16:47.585 "trtype": "TCP", 00:16:47.585 "adrfam": "IPv4", 00:16:47.585 "traddr": "10.0.0.1", 00:16:47.585 "trsvcid": "60076" 00:16:47.585 }, 00:16:47.585 "auth": { 00:16:47.585 "state": "completed", 00:16:47.585 "digest": "sha384", 00:16:47.585 "dhgroup": "null" 00:16:47.585 } 00:16:47.585 } 00:16:47.585 ]' 00:16:47.585 17:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.585 17:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.585 17:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.585 17:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:47.585 17:25:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.843 17:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.843 17:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.843 17:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.843 17:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:16:48.409 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.409 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.409 17:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.409 17:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.409 17:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.409 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.409 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:48.409 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:48.668 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:48.668 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.668 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:48.668 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:48.668 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:48.668 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.668 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.668 17:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.668 17:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.668 17:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.668 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.668 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.926 00:16:48.926 
17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.926 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.926 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.926 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.184 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.184 17:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.184 17:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.184 17:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.184 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.184 { 00:16:49.184 "cntlid": 51, 00:16:49.184 "qid": 0, 00:16:49.184 "state": "enabled", 00:16:49.184 "thread": "nvmf_tgt_poll_group_000", 00:16:49.184 "listen_address": { 00:16:49.184 "trtype": "TCP", 00:16:49.184 "adrfam": "IPv4", 00:16:49.184 "traddr": "10.0.0.2", 00:16:49.184 "trsvcid": "4420" 00:16:49.184 }, 00:16:49.184 "peer_address": { 00:16:49.184 "trtype": "TCP", 00:16:49.184 "adrfam": "IPv4", 00:16:49.184 "traddr": "10.0.0.1", 00:16:49.184 "trsvcid": "46800" 00:16:49.184 }, 00:16:49.184 "auth": { 00:16:49.184 "state": "completed", 00:16:49.184 "digest": "sha384", 00:16:49.184 "dhgroup": "null" 00:16:49.184 } 00:16:49.184 } 00:16:49.184 ]' 00:16:49.184 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.185 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.185 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.185 17:25:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:49.185 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.185 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.185 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.185 17:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.443 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:50.009 17:25:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.009 17:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:50.267 00:16:50.267 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.267 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.267 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.526 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.526 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.526 17:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.526 17:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.526 17:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.526 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.526 { 00:16:50.526 "cntlid": 53, 00:16:50.526 "qid": 0, 00:16:50.526 "state": "enabled", 00:16:50.526 "thread": "nvmf_tgt_poll_group_000", 00:16:50.526 "listen_address": { 00:16:50.526 "trtype": "TCP", 00:16:50.526 "adrfam": "IPv4", 00:16:50.526 "traddr": "10.0.0.2", 00:16:50.526 "trsvcid": "4420" 00:16:50.526 }, 00:16:50.526 "peer_address": { 00:16:50.526 "trtype": "TCP", 00:16:50.526 "adrfam": "IPv4", 00:16:50.526 "traddr": "10.0.0.1", 00:16:50.526 "trsvcid": "46826" 00:16:50.526 }, 00:16:50.526 "auth": { 00:16:50.526 "state": "completed", 00:16:50.526 "digest": "sha384", 00:16:50.526 "dhgroup": "null" 00:16:50.526 } 00:16:50.526 } 00:16:50.526 ]' 00:16:50.526 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.526 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.526 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:16:50.526 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:50.526 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.784 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.784 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.784 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.784 17:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:16:51.351 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.351 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.351 17:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.351 17:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.351 17:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.351 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.351 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:16:51.351 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:51.609 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:51.609 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.609 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:51.609 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:51.609 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:51.609 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.609 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:51.609 17:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.609 17:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.609 17:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.609 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.610 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:16:51.868 00:16:51.868 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.868 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.868 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.126 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.126 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.126 17:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.126 17:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.126 17:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.126 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.126 { 00:16:52.126 "cntlid": 55, 00:16:52.126 "qid": 0, 00:16:52.126 "state": "enabled", 00:16:52.126 "thread": "nvmf_tgt_poll_group_000", 00:16:52.126 "listen_address": { 00:16:52.126 "trtype": "TCP", 00:16:52.126 "adrfam": "IPv4", 00:16:52.126 "traddr": "10.0.0.2", 00:16:52.126 "trsvcid": "4420" 00:16:52.126 }, 00:16:52.126 "peer_address": { 00:16:52.126 "trtype": "TCP", 00:16:52.126 "adrfam": "IPv4", 00:16:52.126 "traddr": "10.0.0.1", 00:16:52.126 "trsvcid": "46854" 00:16:52.126 }, 00:16:52.126 "auth": { 00:16:52.126 "state": "completed", 00:16:52.126 "digest": "sha384", 00:16:52.127 "dhgroup": "null" 00:16:52.127 } 00:16:52.127 } 00:16:52.127 ]' 00:16:52.127 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.127 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.127 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.127 
17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:52.127 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.127 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.127 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.127 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.385 17:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:16:52.952 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.952 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.952 17:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.952 17:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.952 17:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.952 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.952 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.952 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.952 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:53.211 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:53.211 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.211 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:53.211 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:53.211 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:53.211 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.211 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.211 17:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.211 17:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.211 17:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.211 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.211 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.211 00:16:53.469 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.469 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.469 17:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.469 17:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.469 17:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.469 17:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.469 17:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.469 17:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.469 17:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.469 { 00:16:53.470 "cntlid": 57, 00:16:53.470 "qid": 0, 00:16:53.470 "state": "enabled", 00:16:53.470 "thread": "nvmf_tgt_poll_group_000", 00:16:53.470 "listen_address": { 00:16:53.470 "trtype": "TCP", 00:16:53.470 "adrfam": "IPv4", 00:16:53.470 "traddr": "10.0.0.2", 00:16:53.470 "trsvcid": "4420" 00:16:53.470 }, 00:16:53.470 "peer_address": { 00:16:53.470 "trtype": "TCP", 00:16:53.470 "adrfam": "IPv4", 00:16:53.470 "traddr": "10.0.0.1", 00:16:53.470 "trsvcid": "46878" 00:16:53.470 }, 00:16:53.470 "auth": { 00:16:53.470 "state": "completed", 00:16:53.470 "digest": "sha384", 00:16:53.470 "dhgroup": "ffdhe2048" 00:16:53.470 } 00:16:53.470 } 00:16:53.470 ]' 00:16:53.470 17:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.470 17:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:16:53.470 17:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.728 17:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:53.728 17:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.728 17:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.728 17:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.728 17:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.728 17:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:16:54.294 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.294 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.294 17:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.294 17:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.294 17:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.294 17:25:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.294 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.294 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.551 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:54.551 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.551 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.551 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:54.551 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:54.551 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.551 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.551 17:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.551 17:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.551 17:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.551 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.551 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.809 00:16:54.809 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.809 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.809 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.067 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.067 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.067 17:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.067 17:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.067 17:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.067 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.067 { 00:16:55.067 "cntlid": 59, 00:16:55.067 "qid": 0, 00:16:55.067 "state": "enabled", 00:16:55.067 "thread": "nvmf_tgt_poll_group_000", 00:16:55.067 "listen_address": { 00:16:55.067 "trtype": "TCP", 00:16:55.067 "adrfam": "IPv4", 00:16:55.067 "traddr": "10.0.0.2", 00:16:55.067 "trsvcid": "4420" 00:16:55.067 }, 00:16:55.067 "peer_address": { 00:16:55.067 "trtype": "TCP", 00:16:55.067 "adrfam": "IPv4", 00:16:55.067 "traddr": "10.0.0.1", 00:16:55.067 "trsvcid": "46914" 00:16:55.067 }, 00:16:55.067 "auth": { 00:16:55.067 "state": "completed", 00:16:55.067 "digest": "sha384", 00:16:55.067 "dhgroup": "ffdhe2048" 00:16:55.067 } 00:16:55.067 } 00:16:55.067 ]' 00:16:55.067 
17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.067 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.067 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.067 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.067 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.067 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.067 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.067 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.325 17:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:16:55.890 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.890 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.890 17:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.890 17:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.890 17:25:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.890 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.890 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.890 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:56.149 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:56.149 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.149 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:56.149 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:56.149 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:56.149 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.149 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.149 17:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.149 17:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.149 17:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.149 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:56.149 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.149 00:16:56.407 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.407 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.407 17:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.407 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.407 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.407 17:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.407 17:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.407 17:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.407 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.407 { 00:16:56.407 "cntlid": 61, 00:16:56.407 "qid": 0, 00:16:56.407 "state": "enabled", 00:16:56.407 "thread": "nvmf_tgt_poll_group_000", 00:16:56.407 "listen_address": { 00:16:56.407 "trtype": "TCP", 00:16:56.407 "adrfam": "IPv4", 00:16:56.407 "traddr": "10.0.0.2", 00:16:56.407 "trsvcid": "4420" 00:16:56.407 }, 00:16:56.407 "peer_address": { 00:16:56.407 "trtype": "TCP", 00:16:56.407 "adrfam": "IPv4", 00:16:56.407 "traddr": "10.0.0.1", 00:16:56.407 "trsvcid": "46940" 00:16:56.407 }, 00:16:56.407 "auth": { 00:16:56.407 "state": "completed", 00:16:56.407 "digest": 
"sha384", 00:16:56.407 "dhgroup": "ffdhe2048" 00:16:56.407 } 00:16:56.407 } 00:16:56.407 ]' 00:16:56.407 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.407 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.407 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.665 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.665 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.665 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.665 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.665 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.665 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:16:57.230 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.230 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.230 17:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.230 17:25:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.230 17:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.230 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.230 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.230 17:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.488 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:57.488 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.488 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.488 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:57.488 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:57.488 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.488 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:57.488 17:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.488 17:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.488 17:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.488 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.488 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.746 00:16:57.746 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.746 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.746 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.012 { 00:16:58.012 "cntlid": 63, 00:16:58.012 "qid": 0, 00:16:58.012 "state": "enabled", 00:16:58.012 "thread": "nvmf_tgt_poll_group_000", 00:16:58.012 "listen_address": { 00:16:58.012 "trtype": "TCP", 00:16:58.012 "adrfam": "IPv4", 00:16:58.012 "traddr": "10.0.0.2", 00:16:58.012 "trsvcid": "4420" 00:16:58.012 }, 00:16:58.012 "peer_address": { 00:16:58.012 "trtype": "TCP", 00:16:58.012 "adrfam": "IPv4", 00:16:58.012 "traddr": "10.0.0.1", 00:16:58.012 "trsvcid": "33264" 00:16:58.012 }, 00:16:58.012 "auth": 
{ 00:16:58.012 "state": "completed", 00:16:58.012 "digest": "sha384", 00:16:58.012 "dhgroup": "ffdhe2048" 00:16:58.012 } 00:16:58.012 } 00:16:58.012 ]' 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.012 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.272 17:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.837 17:25:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.837 17:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.095 17:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.095 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.095 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.095 00:16:59.353 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.353 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.353 17:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.353 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.353 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.353 17:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.353 17:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.353 17:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.353 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.353 { 00:16:59.353 "cntlid": 65, 00:16:59.353 "qid": 0, 00:16:59.353 "state": "enabled", 00:16:59.353 "thread": "nvmf_tgt_poll_group_000", 00:16:59.353 "listen_address": { 00:16:59.353 "trtype": "TCP", 00:16:59.353 "adrfam": "IPv4", 00:16:59.353 "traddr": "10.0.0.2", 00:16:59.353 "trsvcid": "4420" 00:16:59.353 }, 00:16:59.354 "peer_address": { 00:16:59.354 "trtype": "TCP", 
00:16:59.354 "adrfam": "IPv4", 00:16:59.354 "traddr": "10.0.0.1", 00:16:59.354 "trsvcid": "33286" 00:16:59.354 }, 00:16:59.354 "auth": { 00:16:59.354 "state": "completed", 00:16:59.354 "digest": "sha384", 00:16:59.354 "dhgroup": "ffdhe3072" 00:16:59.354 } 00:16:59.354 } 00:16:59.354 ]' 00:16:59.354 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.354 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.354 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.612 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:59.612 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.612 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.612 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.612 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.612 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:17:00.177 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.177 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.177 17:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.177 17:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.177 17:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.177 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.177 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.177 17:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.435 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:00.435 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.435 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:00.435 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:00.435 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:00.435 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.435 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.435 17:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.435 17:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.435 17:25:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.435 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.435 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.694 00:17:00.694 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.694 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.694 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.952 { 00:17:00.952 "cntlid": 67, 00:17:00.952 "qid": 0, 00:17:00.952 "state": "enabled", 00:17:00.952 "thread": "nvmf_tgt_poll_group_000", 00:17:00.952 "listen_address": { 00:17:00.952 "trtype": "TCP", 00:17:00.952 "adrfam": 
"IPv4", 00:17:00.952 "traddr": "10.0.0.2", 00:17:00.952 "trsvcid": "4420" 00:17:00.952 }, 00:17:00.952 "peer_address": { 00:17:00.952 "trtype": "TCP", 00:17:00.952 "adrfam": "IPv4", 00:17:00.952 "traddr": "10.0.0.1", 00:17:00.952 "trsvcid": "33326" 00:17:00.952 }, 00:17:00.952 "auth": { 00:17:00.952 "state": "completed", 00:17:00.952 "digest": "sha384", 00:17:00.952 "dhgroup": "ffdhe3072" 00:17:00.952 } 00:17:00.952 } 00:17:00.952 ]' 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.952 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.210 17:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:17:01.816 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:01.816 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.816 17:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.816 17:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.816 17:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.816 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.816 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.816 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:02.074 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:02.074 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.074 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:02.074 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:02.074 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:02.074 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.074 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.074 17:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.074 17:25:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.074 17:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.074 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.074 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.333 00:17:02.333 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.333 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.333 17:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.333 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.333 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.333 17:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.333 17:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.333 17:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.333 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.333 { 00:17:02.333 "cntlid": 69, 00:17:02.333 "qid": 0, 00:17:02.333 "state": "enabled", 00:17:02.333 "thread": 
"nvmf_tgt_poll_group_000", 00:17:02.333 "listen_address": { 00:17:02.333 "trtype": "TCP", 00:17:02.333 "adrfam": "IPv4", 00:17:02.333 "traddr": "10.0.0.2", 00:17:02.333 "trsvcid": "4420" 00:17:02.333 }, 00:17:02.333 "peer_address": { 00:17:02.333 "trtype": "TCP", 00:17:02.333 "adrfam": "IPv4", 00:17:02.333 "traddr": "10.0.0.1", 00:17:02.333 "trsvcid": "33362" 00:17:02.333 }, 00:17:02.333 "auth": { 00:17:02.333 "state": "completed", 00:17:02.333 "digest": "sha384", 00:17:02.333 "dhgroup": "ffdhe3072" 00:17:02.333 } 00:17:02.333 } 00:17:02.333 ]' 00:17:02.333 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.590 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.590 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.590 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.590 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.590 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.590 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.590 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.846 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:17:03.412 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.412 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.412 17:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.412 17:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.412 17:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.412 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.412 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.412 17:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.412 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:03.412 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.412 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:03.412 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:03.412 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:03.412 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.412 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:03.412 17:25:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.412 17:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.412 17:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.412 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.412 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.670 00:17:03.670 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.670 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.670 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.928 { 00:17:03.928 "cntlid": 71, 00:17:03.928 "qid": 0, 00:17:03.928 "state": "enabled", 00:17:03.928 "thread": 
"nvmf_tgt_poll_group_000", 00:17:03.928 "listen_address": { 00:17:03.928 "trtype": "TCP", 00:17:03.928 "adrfam": "IPv4", 00:17:03.928 "traddr": "10.0.0.2", 00:17:03.928 "trsvcid": "4420" 00:17:03.928 }, 00:17:03.928 "peer_address": { 00:17:03.928 "trtype": "TCP", 00:17:03.928 "adrfam": "IPv4", 00:17:03.928 "traddr": "10.0.0.1", 00:17:03.928 "trsvcid": "33398" 00:17:03.928 }, 00:17:03.928 "auth": { 00:17:03.928 "state": "completed", 00:17:03.928 "digest": "sha384", 00:17:03.928 "dhgroup": "ffdhe3072" 00:17:03.928 } 00:17:03.928 } 00:17:03.928 ]' 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.928 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.241 17:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:17:04.805 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.805 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.805 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.805 17:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.805 17:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.805 17:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.805 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.805 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.805 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:04.805 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.062 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:05.062 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.062 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:05.062 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:05.062 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:05.062 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.062 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:05.062 17:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.062 17:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.062 17:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.062 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.062 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.319 00:17:05.319 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.319 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.319 17:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.319 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.319 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.319 17:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.319 17:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.319 17:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.319 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:05.319 { 00:17:05.319 "cntlid": 73, 00:17:05.319 "qid": 0, 00:17:05.319 "state": "enabled", 00:17:05.319 "thread": "nvmf_tgt_poll_group_000", 00:17:05.319 "listen_address": { 00:17:05.319 "trtype": "TCP", 00:17:05.319 "adrfam": "IPv4", 00:17:05.319 "traddr": "10.0.0.2", 00:17:05.319 "trsvcid": "4420" 00:17:05.319 }, 00:17:05.319 "peer_address": { 00:17:05.319 "trtype": "TCP", 00:17:05.319 "adrfam": "IPv4", 00:17:05.319 "traddr": "10.0.0.1", 00:17:05.319 "trsvcid": "33420" 00:17:05.319 }, 00:17:05.319 "auth": { 00:17:05.319 "state": "completed", 00:17:05.319 "digest": "sha384", 00:17:05.319 "dhgroup": "ffdhe4096" 00:17:05.319 } 00:17:05.319 } 00:17:05.319 ]' 00:17:05.319 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.576 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.576 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.576 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:05.576 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.576 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.576 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.576 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.834 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret 
DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:17:06.399 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.399 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.400 17:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.400 17:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.400 17:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.400 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.400 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:06.400 17:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:06.400 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:06.400 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.400 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:06.400 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:06.400 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:06.400 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.400 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.400 17:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.400 17:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.400 17:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.400 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.400 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.657 00:17:06.657 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.657 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.657 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.915 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.915 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.915 17:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.915 17:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.915 17:25:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.915 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.915 { 00:17:06.915 "cntlid": 75, 00:17:06.915 "qid": 0, 00:17:06.915 "state": "enabled", 00:17:06.915 "thread": "nvmf_tgt_poll_group_000", 00:17:06.915 "listen_address": { 00:17:06.915 "trtype": "TCP", 00:17:06.915 "adrfam": "IPv4", 00:17:06.915 "traddr": "10.0.0.2", 00:17:06.915 "trsvcid": "4420" 00:17:06.915 }, 00:17:06.915 "peer_address": { 00:17:06.915 "trtype": "TCP", 00:17:06.915 "adrfam": "IPv4", 00:17:06.915 "traddr": "10.0.0.1", 00:17:06.915 "trsvcid": "33454" 00:17:06.915 }, 00:17:06.915 "auth": { 00:17:06.915 "state": "completed", 00:17:06.915 "digest": "sha384", 00:17:06.915 "dhgroup": "ffdhe4096" 00:17:06.915 } 00:17:06.915 } 00:17:06.915 ]' 00:17:06.915 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.915 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.915 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.173 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:07.173 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.173 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.173 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.173 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.173 17:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:17:07.749 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.749 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.749 17:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.749 17:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.749 17:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.749 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.749 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:07.749 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.007 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:08.007 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.007 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.007 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:08.007 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:08.007 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:08.007 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.007 17:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.007 17:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.007 17:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.007 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.007 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.265 00:17:08.265 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.265 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.265 17:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.523 17:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.523 17:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.523 17:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.523 17:25:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.523 17:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.523 17:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.523 { 00:17:08.523 "cntlid": 77, 00:17:08.523 "qid": 0, 00:17:08.523 "state": "enabled", 00:17:08.523 "thread": "nvmf_tgt_poll_group_000", 00:17:08.523 "listen_address": { 00:17:08.523 "trtype": "TCP", 00:17:08.523 "adrfam": "IPv4", 00:17:08.523 "traddr": "10.0.0.2", 00:17:08.523 "trsvcid": "4420" 00:17:08.523 }, 00:17:08.523 "peer_address": { 00:17:08.523 "trtype": "TCP", 00:17:08.523 "adrfam": "IPv4", 00:17:08.523 "traddr": "10.0.0.1", 00:17:08.523 "trsvcid": "41804" 00:17:08.523 }, 00:17:08.523 "auth": { 00:17:08.523 "state": "completed", 00:17:08.523 "digest": "sha384", 00:17:08.523 "dhgroup": "ffdhe4096" 00:17:08.523 } 00:17:08.523 } 00:17:08.523 ]' 00:17:08.523 17:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.523 17:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.523 17:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.523 17:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:08.523 17:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.524 17:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.524 17:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.524 17:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.782 17:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:17:09.348 17:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.348 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.348 17:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.348 17:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.348 17:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.348 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.348 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.348 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.606 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:09.606 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.606 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:09.606 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:09.606 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:09.606 17:25:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.606 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:09.606 17:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.606 17:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.606 17:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.606 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.606 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.864 00:17:09.864 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.864 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.864 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.123 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.123 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.123 17:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.123 17:25:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.123 17:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.123 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.123 { 00:17:10.123 "cntlid": 79, 00:17:10.123 "qid": 0, 00:17:10.123 "state": "enabled", 00:17:10.123 "thread": "nvmf_tgt_poll_group_000", 00:17:10.123 "listen_address": { 00:17:10.123 "trtype": "TCP", 00:17:10.123 "adrfam": "IPv4", 00:17:10.123 "traddr": "10.0.0.2", 00:17:10.123 "trsvcid": "4420" 00:17:10.123 }, 00:17:10.123 "peer_address": { 00:17:10.123 "trtype": "TCP", 00:17:10.123 "adrfam": "IPv4", 00:17:10.123 "traddr": "10.0.0.1", 00:17:10.123 "trsvcid": "41834" 00:17:10.123 }, 00:17:10.123 "auth": { 00:17:10.123 "state": "completed", 00:17:10.123 "digest": "sha384", 00:17:10.123 "dhgroup": "ffdhe4096" 00:17:10.123 } 00:17:10.123 } 00:17:10.123 ]' 00:17:10.123 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.123 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.123 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.123 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.123 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.123 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.123 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.123 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.382 17:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.949 17:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.207 17:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.207 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.207 17:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.466 00:17:11.466 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.466 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.466 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.466 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.724 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:11.724 17:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.724 17:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.724 17:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.724 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.724 { 00:17:11.724 "cntlid": 81, 00:17:11.724 "qid": 0, 00:17:11.724 "state": "enabled", 00:17:11.724 "thread": "nvmf_tgt_poll_group_000", 00:17:11.724 "listen_address": { 00:17:11.724 "trtype": "TCP", 00:17:11.724 "adrfam": "IPv4", 00:17:11.724 "traddr": "10.0.0.2", 00:17:11.724 "trsvcid": "4420" 00:17:11.724 }, 00:17:11.724 "peer_address": { 00:17:11.724 "trtype": "TCP", 00:17:11.724 "adrfam": "IPv4", 00:17:11.724 "traddr": "10.0.0.1", 00:17:11.724 "trsvcid": "41846" 00:17:11.724 }, 00:17:11.724 "auth": { 00:17:11.724 "state": "completed", 00:17:11.724 "digest": "sha384", 00:17:11.724 "dhgroup": "ffdhe6144" 00:17:11.724 } 00:17:11.724 } 00:17:11.724 ]' 00:17:11.724 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.724 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.724 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.724 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.724 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.724 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.724 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.724 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:11.982 17:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:17:12.548 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.548 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.548 17:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.548 17:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.549 17:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.549 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.549 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:12.549 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:12.549 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:12.549 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.549 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:17:12.549 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:12.549 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:12.549 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.549 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.807 17:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.807 17:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.807 17:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.807 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.807 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.065 00:17:13.065 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.065 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.065 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.323 { 00:17:13.323 "cntlid": 83, 00:17:13.323 "qid": 0, 00:17:13.323 "state": "enabled", 00:17:13.323 "thread": "nvmf_tgt_poll_group_000", 00:17:13.323 "listen_address": { 00:17:13.323 "trtype": "TCP", 00:17:13.323 "adrfam": "IPv4", 00:17:13.323 "traddr": "10.0.0.2", 00:17:13.323 "trsvcid": "4420" 00:17:13.323 }, 00:17:13.323 "peer_address": { 00:17:13.323 "trtype": "TCP", 00:17:13.323 "adrfam": "IPv4", 00:17:13.323 "traddr": "10.0.0.1", 00:17:13.323 "trsvcid": "41876" 00:17:13.323 }, 00:17:13.323 "auth": { 00:17:13.323 "state": "completed", 00:17:13.323 "digest": "sha384", 00:17:13.323 "dhgroup": "ffdhe6144" 00:17:13.323 } 00:17:13.323 } 00:17:13.323 ]' 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.323 17:25:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.582 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.149 17:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.407 17:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.407 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.407 17:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.665 00:17:14.665 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.665 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.665 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:17:14.665 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.665 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.665 17:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.665 17:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.923 17:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.923 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.923 { 00:17:14.923 "cntlid": 85, 00:17:14.923 "qid": 0, 00:17:14.923 "state": "enabled", 00:17:14.923 "thread": "nvmf_tgt_poll_group_000", 00:17:14.923 "listen_address": { 00:17:14.923 "trtype": "TCP", 00:17:14.923 "adrfam": "IPv4", 00:17:14.923 "traddr": "10.0.0.2", 00:17:14.923 "trsvcid": "4420" 00:17:14.923 }, 00:17:14.923 "peer_address": { 00:17:14.923 "trtype": "TCP", 00:17:14.923 "adrfam": "IPv4", 00:17:14.923 "traddr": "10.0.0.1", 00:17:14.923 "trsvcid": "41910" 00:17:14.923 }, 00:17:14.923 "auth": { 00:17:14.923 "state": "completed", 00:17:14.923 "digest": "sha384", 00:17:14.923 "dhgroup": "ffdhe6144" 00:17:14.923 } 00:17:14.923 } 00:17:14.923 ]' 00:17:14.923 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.923 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.923 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.923 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:14.923 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.923 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.923 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:14.923 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.181 17:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:17:15.748 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:15.749 17:25:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.749 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:16.316 00:17:16.316 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.316 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.316 17:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq 
-r '.[].name' 00:17:16.316 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.316 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.316 17:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.316 17:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.316 17:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.316 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.316 { 00:17:16.316 "cntlid": 87, 00:17:16.316 "qid": 0, 00:17:16.316 "state": "enabled", 00:17:16.316 "thread": "nvmf_tgt_poll_group_000", 00:17:16.316 "listen_address": { 00:17:16.316 "trtype": "TCP", 00:17:16.316 "adrfam": "IPv4", 00:17:16.316 "traddr": "10.0.0.2", 00:17:16.316 "trsvcid": "4420" 00:17:16.316 }, 00:17:16.316 "peer_address": { 00:17:16.316 "trtype": "TCP", 00:17:16.316 "adrfam": "IPv4", 00:17:16.316 "traddr": "10.0.0.1", 00:17:16.316 "trsvcid": "41938" 00:17:16.316 }, 00:17:16.316 "auth": { 00:17:16.316 "state": "completed", 00:17:16.316 "digest": "sha384", 00:17:16.316 "dhgroup": "ffdhe6144" 00:17:16.316 } 00:17:16.316 } 00:17:16.316 ]' 00:17:16.316 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.574 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.574 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.574 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.574 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.574 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.574 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:17:16.574 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.833 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:17:17.400 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.400 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.400 17:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.400 17:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.400 17:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.400 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.400 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.400 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:17.400 17:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:17.400 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha384 ffdhe8192 0 00:17:17.400 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.400 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.400 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:17.400 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:17.400 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.400 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.400 17:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.400 17:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.400 17:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.400 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.400 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.967 00:17:17.967 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.967 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.967 17:25:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.271 { 00:17:18.271 "cntlid": 89, 00:17:18.271 "qid": 0, 00:17:18.271 "state": "enabled", 00:17:18.271 "thread": "nvmf_tgt_poll_group_000", 00:17:18.271 "listen_address": { 00:17:18.271 "trtype": "TCP", 00:17:18.271 "adrfam": "IPv4", 00:17:18.271 "traddr": "10.0.0.2", 00:17:18.271 "trsvcid": "4420" 00:17:18.271 }, 00:17:18.271 "peer_address": { 00:17:18.271 "trtype": "TCP", 00:17:18.271 "adrfam": "IPv4", 00:17:18.271 "traddr": "10.0.0.1", 00:17:18.271 "trsvcid": "37728" 00:17:18.271 }, 00:17:18.271 "auth": { 00:17:18.271 "state": "completed", 00:17:18.271 "digest": "sha384", 00:17:18.271 "dhgroup": "ffdhe8192" 00:17:18.271 } 00:17:18.271 } 00:17:18.271 ]' 00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:18.271 17:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:18.547 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=:
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:19.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:19.114 17:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:19.115 17:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:19.115 17:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:19.115 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:19.115 17:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:19.681
00:17:19.681 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:19.681 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:19.681 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:19.939 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:19.939 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:19.939 17:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:19.939 17:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:19.939 17:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:19.939 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:19.939 {
00:17:19.939 "cntlid": 91,
00:17:19.939 "qid": 0,
00:17:19.939 "state": "enabled",
00:17:19.939 "thread": "nvmf_tgt_poll_group_000",
00:17:19.939 "listen_address": {
00:17:19.939 "trtype": "TCP",
00:17:19.939 "adrfam": "IPv4",
00:17:19.939 "traddr": "10.0.0.2",
00:17:19.939 "trsvcid": "4420"
00:17:19.939 },
00:17:19.940 "peer_address": {
00:17:19.940 "trtype": "TCP",
00:17:19.940 "adrfam": "IPv4",
00:17:19.940 "traddr": "10.0.0.1",
00:17:19.940 "trsvcid": "37748"
00:17:19.940 },
00:17:19.940 "auth": {
00:17:19.940 "state": "completed",
00:17:19.940 "digest": "sha384",
00:17:19.940 "dhgroup": "ffdhe8192"
00:17:19.940 }
00:17:19.940 }
00:17:19.940 ]'
00:17:19.940 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:19.940 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:19.940 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:19.940 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:19.940 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:19.940 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:19.940 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:19.940 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:20.198 17:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==:
00:17:20.765 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:20.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:20.765 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:20.765 17:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:20.765 17:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:20.765 17:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:20.765 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:20.765 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:20.765 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:21.023 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2
00:17:21.023 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:21.023 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:21.023 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:21.023 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:21.023 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:21.023 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:21.023 17:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:21.023 17:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.023 17:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:21.023 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:21.023 17:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:21.282
00:17:21.282 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:21.282 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:21.282 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:21.540 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:21.540 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:21.540 17:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:21.540 17:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.540 17:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:21.540 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:21.540 {
00:17:21.540 "cntlid": 93,
00:17:21.540 "qid": 0,
00:17:21.540 "state": "enabled",
00:17:21.540 "thread": "nvmf_tgt_poll_group_000",
00:17:21.540 "listen_address": {
00:17:21.540 "trtype": "TCP",
00:17:21.540 "adrfam": "IPv4",
00:17:21.540 "traddr": "10.0.0.2",
00:17:21.540 "trsvcid": "4420"
00:17:21.540 },
00:17:21.540 "peer_address": {
00:17:21.540 "trtype": "TCP",
00:17:21.540 "adrfam": "IPv4",
00:17:21.540 "traddr": "10.0.0.1",
00:17:21.540 "trsvcid": "37780"
00:17:21.540 },
00:17:21.540 "auth": {
00:17:21.540 "state": "completed",
00:17:21.540 "digest": "sha384",
00:17:21.540 "dhgroup": "ffdhe8192"
00:17:21.540 }
00:17:21.540 }
00:17:21.540 ]'
00:17:21.540 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:21.540 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:21.540 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:21.540 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:21.540 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:21.797 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:21.797 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:21.797 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:21.798 17:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6:
00:17:22.363 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:22.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:22.363 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:22.363 17:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:22.363 17:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.363 17:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:22.363 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:22.363 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:22.363 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:22.622 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3
00:17:22.622 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:22.622 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:22.622 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:22.622 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:22.622 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:22.622 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:17:22.622 17:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:22.622 17:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.622 17:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:22.622 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:22.622 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:23.188
00:17:23.188 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:23.188 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:23.188 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:23.188 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:23.188 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:23.188 17:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:23.188 17:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.188 17:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:23.188 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:23.188 {
00:17:23.188 "cntlid": 95,
00:17:23.188 "qid": 0,
00:17:23.188 "state": "enabled",
00:17:23.188 "thread": "nvmf_tgt_poll_group_000",
00:17:23.188 "listen_address": {
00:17:23.188 "trtype": "TCP",
00:17:23.188 "adrfam": "IPv4",
00:17:23.188 "traddr": "10.0.0.2",
00:17:23.188 "trsvcid": "4420"
00:17:23.188 },
00:17:23.188 "peer_address": {
00:17:23.188 "trtype": "TCP",
00:17:23.188 "adrfam": "IPv4",
00:17:23.188 "traddr": "10.0.0.1",
00:17:23.188 "trsvcid": "37808"
00:17:23.188 },
00:17:23.188 "auth": {
00:17:23.188 "state": "completed",
00:17:23.188 "digest": "sha384",
00:17:23.188 "dhgroup": "ffdhe8192"
00:17:23.188 }
00:17:23.188 }
00:17:23.188 ]'
00:17:23.445 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:23.445 17:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:23.445 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:23.445 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:23.445 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:23.445 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:23.445 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:23.445 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:23.703 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=:
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:24.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.269 17:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.269 17:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.269 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:24.269 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:24.527
00:17:24.527 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:24.527 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:24.527 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:24.786 {
00:17:24.786 "cntlid": 97,
00:17:24.786 "qid": 0,
00:17:24.786 "state": "enabled",
00:17:24.786 "thread": "nvmf_tgt_poll_group_000",
00:17:24.786 "listen_address": {
00:17:24.786 "trtype": "TCP",
00:17:24.786 "adrfam": "IPv4",
00:17:24.786 "traddr": "10.0.0.2",
00:17:24.786 "trsvcid": "4420"
00:17:24.786 },
00:17:24.786 "peer_address": {
00:17:24.786 "trtype": "TCP",
00:17:24.786 "adrfam": "IPv4",
00:17:24.786 "traddr": "10.0.0.1",
00:17:24.786 "trsvcid": "37816"
00:17:24.786 },
00:17:24.786 "auth": {
00:17:24.786 "state": "completed",
00:17:24.786 "digest": "sha512",
00:17:24.786 "dhgroup": "null"
00:17:24.786 }
00:17:24.786 }
00:17:24.786 ]'
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:24.786 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:25.045 17:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=:
00:17:25.611 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:25.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:25.611 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:25.611 17:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:25.611 17:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.611 17:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:25.611 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:25.611 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:25.611 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:25.869 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1
00:17:25.869 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:25.869 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:17:25.869 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:25.869 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:25.870 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:25.870 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:25.870 17:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:25.870 17:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.870 17:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:25.870 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:25.870 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:26.128
00:17:26.128 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:26.128 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:26.128 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:26.128 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:26.128 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:26.128 17:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:26.128 17:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.386 17:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:26.386 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:26.386 {
00:17:26.386 "cntlid": 99,
00:17:26.386 "qid": 0,
00:17:26.386 "state": "enabled",
00:17:26.386 "thread": "nvmf_tgt_poll_group_000",
00:17:26.386 "listen_address": {
00:17:26.386 "trtype": "TCP",
00:17:26.386 "adrfam": "IPv4",
00:17:26.386 "traddr": "10.0.0.2",
00:17:26.386 "trsvcid": "4420"
00:17:26.386 },
00:17:26.386 "peer_address": {
00:17:26.386 "trtype": "TCP",
00:17:26.386 "adrfam": "IPv4",
00:17:26.386 "traddr": "10.0.0.1",
00:17:26.386 "trsvcid": "37830"
00:17:26.386 },
00:17:26.386 "auth": {
00:17:26.386 "state": "completed",
00:17:26.386 "digest": "sha512",
00:17:26.386 "dhgroup": "null"
00:17:26.386 }
00:17:26.386 }
00:17:26.386 ]'
00:17:26.386 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:26.386 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:26.386 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:26.386 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:26.386 17:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:26.386 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:26.386 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:26.386 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:26.644 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==:
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:27.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:27.211 17:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:27.469
00:17:27.469 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:27.469 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:27.469 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:27.727 {
00:17:27.727 "cntlid": 101,
00:17:27.727 "qid": 0,
00:17:27.727 "state": "enabled",
00:17:27.727 "thread": "nvmf_tgt_poll_group_000",
00:17:27.727 "listen_address": {
00:17:27.727 "trtype": "TCP",
00:17:27.727 "adrfam": "IPv4",
00:17:27.727 "traddr": "10.0.0.2",
00:17:27.727 "trsvcid": "4420"
00:17:27.727 },
00:17:27.727 "peer_address": {
00:17:27.727 "trtype": "TCP",
00:17:27.727 "adrfam": "IPv4",
00:17:27.727 "traddr": "10.0.0.1",
00:17:27.727 "trsvcid": "58386"
00:17:27.727 },
00:17:27.727 "auth": {
00:17:27.727 "state": "completed",
00:17:27.727 "digest": "sha512",
00:17:27.727 "dhgroup": "null"
00:17:27.727 }
00:17:27.727 }
00:17:27.727 ]'
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:27.727 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:27.986 17:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6:
00:17:28.552 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:28.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:28.552 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:28.552 17:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:28.552 17:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.552 17:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:28.552 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:28.552 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:28.552 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:28.811 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3
00:17:28.811 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:28.811 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:17:28.811 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:28.811 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:28.811 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:28.811 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:17:28.811 17:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:28.811 17:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.811 17:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:28.811 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:28.811 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:29.069
00:17:29.069 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:29.069 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:29.069 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:29.069 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:29.069 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:29.069 17:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:29.069 17:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.327 17:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:29.327 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:29.327 {
00:17:29.327 "cntlid": 103,
00:17:29.327 "qid": 0,
00:17:29.327 "state": "enabled",
00:17:29.327 "thread": "nvmf_tgt_poll_group_000",
00:17:29.327 "listen_address": {
00:17:29.327 "trtype": "TCP",
00:17:29.327 "adrfam": "IPv4",
00:17:29.327 "traddr": "10.0.0.2",
00:17:29.327 "trsvcid": "4420"
00:17:29.327 },
00:17:29.327 "peer_address": {
00:17:29.327 "trtype": "TCP",
00:17:29.327 "adrfam": "IPv4",
00:17:29.327 "traddr": "10.0.0.1",
00:17:29.327 "trsvcid": "58412"
00:17:29.327 },
00:17:29.327 "auth": {
00:17:29.327 "state": "completed",
00:17:29.327 "digest": "sha512",
00:17:29.327 "dhgroup": "null"
00:17:29.327 }
00:17:29.327 }
00:17:29.327 ]' 00:17:29.327 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.327 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.327 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.327 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:29.327 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.327 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.327 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.327 17:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.585 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.152 17:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.410 00:17:30.410 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.410 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.410 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.668 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.668 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.668 17:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.668 17:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.668 17:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.668 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.668 { 00:17:30.668 "cntlid": 105, 00:17:30.668 "qid": 0, 00:17:30.668 "state": "enabled", 00:17:30.668 "thread": "nvmf_tgt_poll_group_000", 00:17:30.668 "listen_address": { 00:17:30.668 "trtype": "TCP", 00:17:30.668 "adrfam": "IPv4", 00:17:30.668 "traddr": "10.0.0.2", 00:17:30.668 "trsvcid": "4420" 00:17:30.668 }, 00:17:30.668 "peer_address": { 00:17:30.668 "trtype": "TCP", 00:17:30.668 "adrfam": "IPv4", 00:17:30.668 "traddr": "10.0.0.1", 00:17:30.668 "trsvcid": "58432" 00:17:30.668 }, 00:17:30.668 "auth": { 00:17:30.668 
"state": "completed", 00:17:30.668 "digest": "sha512", 00:17:30.668 "dhgroup": "ffdhe2048" 00:17:30.668 } 00:17:30.668 } 00:17:30.668 ]' 00:17:30.668 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.668 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.668 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.668 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.668 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.926 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.926 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.926 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.927 17:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:17:31.492 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.492 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.492 17:25:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.492 17:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.492 17:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.492 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.492 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:31.492 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:31.750 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:31.750 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.750 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.750 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:31.750 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:31.750 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.750 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.750 17:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.750 17:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.750 17:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.750 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.750 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.008 00:17:32.008 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.008 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.008 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.267 { 00:17:32.267 "cntlid": 107, 00:17:32.267 "qid": 0, 00:17:32.267 "state": "enabled", 00:17:32.267 "thread": "nvmf_tgt_poll_group_000", 00:17:32.267 "listen_address": { 00:17:32.267 "trtype": "TCP", 00:17:32.267 "adrfam": "IPv4", 00:17:32.267 "traddr": "10.0.0.2", 00:17:32.267 "trsvcid": "4420" 00:17:32.267 }, 00:17:32.267 "peer_address": { 00:17:32.267 "trtype": "TCP", 
00:17:32.267 "adrfam": "IPv4", 00:17:32.267 "traddr": "10.0.0.1", 00:17:32.267 "trsvcid": "58446" 00:17:32.267 }, 00:17:32.267 "auth": { 00:17:32.267 "state": "completed", 00:17:32.267 "digest": "sha512", 00:17:32.267 "dhgroup": "ffdhe2048" 00:17:32.267 } 00:17:32.267 } 00:17:32.267 ]' 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.267 17:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.525 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:17:33.092 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.092 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.092 17:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.092 17:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.092 17:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.092 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.092 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.092 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.351 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:33.351 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.351 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.351 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:33.351 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:33.351 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.351 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.351 17:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.351 17:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.351 17:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:17:33.351 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.351 17:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.351 00:17:33.610 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.610 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.610 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.610 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.610 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.610 17:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.610 17:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.610 17:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.610 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.610 { 00:17:33.610 "cntlid": 109, 00:17:33.610 "qid": 0, 00:17:33.610 "state": "enabled", 00:17:33.610 "thread": "nvmf_tgt_poll_group_000", 00:17:33.610 "listen_address": { 00:17:33.610 "trtype": "TCP", 00:17:33.610 "adrfam": "IPv4", 00:17:33.610 "traddr": "10.0.0.2", 00:17:33.610 "trsvcid": "4420" 
00:17:33.610 }, 00:17:33.610 "peer_address": { 00:17:33.610 "trtype": "TCP", 00:17:33.610 "adrfam": "IPv4", 00:17:33.610 "traddr": "10.0.0.1", 00:17:33.610 "trsvcid": "58472" 00:17:33.610 }, 00:17:33.610 "auth": { 00:17:33.610 "state": "completed", 00:17:33.610 "digest": "sha512", 00:17:33.610 "dhgroup": "ffdhe2048" 00:17:33.610 } 00:17:33.610 } 00:17:33.610 ]' 00:17:33.610 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.610 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.610 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.868 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.868 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.868 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.868 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.868 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.127 17:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.693 17:25:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.693 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.985 00:17:34.985 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.985 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.985 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.242 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.242 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.242 17:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.242 17:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.242 17:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.242 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.242 { 00:17:35.242 "cntlid": 111, 00:17:35.242 "qid": 0, 00:17:35.242 "state": "enabled", 00:17:35.242 "thread": "nvmf_tgt_poll_group_000", 00:17:35.242 "listen_address": { 00:17:35.242 "trtype": "TCP", 00:17:35.242 "adrfam": "IPv4", 00:17:35.242 "traddr": "10.0.0.2", 
00:17:35.242 "trsvcid": "4420" 00:17:35.242 }, 00:17:35.243 "peer_address": { 00:17:35.243 "trtype": "TCP", 00:17:35.243 "adrfam": "IPv4", 00:17:35.243 "traddr": "10.0.0.1", 00:17:35.243 "trsvcid": "58508" 00:17:35.243 }, 00:17:35.243 "auth": { 00:17:35.243 "state": "completed", 00:17:35.243 "digest": "sha512", 00:17:35.243 "dhgroup": "ffdhe2048" 00:17:35.243 } 00:17:35.243 } 00:17:35.243 ]' 00:17:35.243 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.243 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.243 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.243 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:35.243 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.243 17:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.243 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.243 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.500 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:17:36.064 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.064 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.064 17:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.064 17:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.064 17:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.064 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.064 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.064 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.064 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.321 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:36.321 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.321 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:36.321 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:36.321 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:36.321 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.321 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.321 17:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.321 17:25:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.321 17:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.321 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.321 17:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.579 00:17:36.579 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.579 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.579 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.836 { 00:17:36.836 "cntlid": 113, 00:17:36.836 "qid": 0, 00:17:36.836 "state": "enabled", 00:17:36.836 "thread": 
"nvmf_tgt_poll_group_000", 00:17:36.836 "listen_address": { 00:17:36.836 "trtype": "TCP", 00:17:36.836 "adrfam": "IPv4", 00:17:36.836 "traddr": "10.0.0.2", 00:17:36.836 "trsvcid": "4420" 00:17:36.836 }, 00:17:36.836 "peer_address": { 00:17:36.836 "trtype": "TCP", 00:17:36.836 "adrfam": "IPv4", 00:17:36.836 "traddr": "10.0.0.1", 00:17:36.836 "trsvcid": "58546" 00:17:36.836 }, 00:17:36.836 "auth": { 00:17:36.836 "state": "completed", 00:17:36.836 "digest": "sha512", 00:17:36.836 "dhgroup": "ffdhe3072" 00:17:36.836 } 00:17:36.836 } 00:17:36.836 ]' 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.836 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.093 17:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:17:37.657 17:25:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.657 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.657 17:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.657 17:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.657 17:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.657 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.657 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:37.658 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:37.915 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:37.915 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.915 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:37.915 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:37.915 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:37.915 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.915 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:37.915 17:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.915 17:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.915 17:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.915 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.915 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.172 00:17:38.172 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.172 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.172 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.172 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.172 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.172 17:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.172 17:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.172 17:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.172 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:38.172 { 00:17:38.172 "cntlid": 115, 00:17:38.172 "qid": 0, 00:17:38.172 "state": "enabled", 00:17:38.172 "thread": "nvmf_tgt_poll_group_000", 00:17:38.172 "listen_address": { 00:17:38.172 "trtype": "TCP", 00:17:38.172 "adrfam": "IPv4", 00:17:38.172 "traddr": "10.0.0.2", 00:17:38.172 "trsvcid": "4420" 00:17:38.172 }, 00:17:38.172 "peer_address": { 00:17:38.172 "trtype": "TCP", 00:17:38.172 "adrfam": "IPv4", 00:17:38.172 "traddr": "10.0.0.1", 00:17:38.172 "trsvcid": "55028" 00:17:38.172 }, 00:17:38.172 "auth": { 00:17:38.172 "state": "completed", 00:17:38.172 "digest": "sha512", 00:17:38.172 "dhgroup": "ffdhe3072" 00:17:38.172 } 00:17:38.172 } 00:17:38.172 ]' 00:17:38.172 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.172 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.172 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.428 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.428 17:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.428 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.428 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.428 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.428 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret 
DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:17:38.992 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.992 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.992 17:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.992 17:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.992 17:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.992 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.992 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.992 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:39.249 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:39.249 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.249 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.249 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:39.249 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:39.249 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.249 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.249 17:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.249 17:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.249 17:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.249 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.249 17:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.506 00:17:39.506 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.506 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.506 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.764 { 00:17:39.764 "cntlid": 117, 00:17:39.764 "qid": 0, 00:17:39.764 "state": "enabled", 00:17:39.764 "thread": "nvmf_tgt_poll_group_000", 00:17:39.764 "listen_address": { 00:17:39.764 "trtype": "TCP", 00:17:39.764 "adrfam": "IPv4", 00:17:39.764 "traddr": "10.0.0.2", 00:17:39.764 "trsvcid": "4420" 00:17:39.764 }, 00:17:39.764 "peer_address": { 00:17:39.764 "trtype": "TCP", 00:17:39.764 "adrfam": "IPv4", 00:17:39.764 "traddr": "10.0.0.1", 00:17:39.764 "trsvcid": "55050" 00:17:39.764 }, 00:17:39.764 "auth": { 00:17:39.764 "state": "completed", 00:17:39.764 "digest": "sha512", 00:17:39.764 "dhgroup": "ffdhe3072" 00:17:39.764 } 00:17:39.764 } 00:17:39.764 ]' 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.764 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.021 17:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:17:40.586 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.586 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.586 17:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.586 17:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.586 17:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.586 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.586 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.586 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.846 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:40.846 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.846 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.846 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:40.846 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:40.846 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.846 17:25:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:40.846 17:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.846 17:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.846 17:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.846 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.846 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.104 00:17:41.104 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.104 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.104 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.362 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.362 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.362 17:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.362 17:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.362 17:25:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.362 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.362 { 00:17:41.362 "cntlid": 119, 00:17:41.362 "qid": 0, 00:17:41.362 "state": "enabled", 00:17:41.362 "thread": "nvmf_tgt_poll_group_000", 00:17:41.362 "listen_address": { 00:17:41.362 "trtype": "TCP", 00:17:41.362 "adrfam": "IPv4", 00:17:41.362 "traddr": "10.0.0.2", 00:17:41.362 "trsvcid": "4420" 00:17:41.362 }, 00:17:41.362 "peer_address": { 00:17:41.362 "trtype": "TCP", 00:17:41.362 "adrfam": "IPv4", 00:17:41.362 "traddr": "10.0.0.1", 00:17:41.362 "trsvcid": "55080" 00:17:41.362 }, 00:17:41.362 "auth": { 00:17:41.362 "state": "completed", 00:17:41.362 "digest": "sha512", 00:17:41.362 "dhgroup": "ffdhe3072" 00:17:41.362 } 00:17:41.362 } 00:17:41.362 ]' 00:17:41.362 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.362 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.362 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.362 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:41.362 17:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.362 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.362 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.362 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.620 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:17:42.187 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.187 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.187 17:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.187 17:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.187 17:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.187 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.187 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.187 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:42.187 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:42.187 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:42.187 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.187 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:42.445 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:42.445 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:42.445 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.445 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.445 17:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.445 17:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.445 17:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.445 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.445 17:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.703 00:17:42.703 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.703 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.703 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.703 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.703 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.703 17:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:17:42.703 17:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.703 17:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.703 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.703 { 00:17:42.703 "cntlid": 121, 00:17:42.703 "qid": 0, 00:17:42.703 "state": "enabled", 00:17:42.703 "thread": "nvmf_tgt_poll_group_000", 00:17:42.703 "listen_address": { 00:17:42.703 "trtype": "TCP", 00:17:42.703 "adrfam": "IPv4", 00:17:42.703 "traddr": "10.0.0.2", 00:17:42.703 "trsvcid": "4420" 00:17:42.703 }, 00:17:42.703 "peer_address": { 00:17:42.703 "trtype": "TCP", 00:17:42.703 "adrfam": "IPv4", 00:17:42.703 "traddr": "10.0.0.1", 00:17:42.703 "trsvcid": "55124" 00:17:42.703 }, 00:17:42.703 "auth": { 00:17:42.703 "state": "completed", 00:17:42.703 "digest": "sha512", 00:17:42.703 "dhgroup": "ffdhe4096" 00:17:42.703 } 00:17:42.703 } 00:17:42.703 ]' 00:17:42.703 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.703 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.703 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.961 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.961 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.961 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.961 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.961 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.961 17:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:17:43.526 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.526 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.526 17:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.526 17:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.526 17:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.526 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.526 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:43.526 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:43.784 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:43.784 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.784 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.784 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:43.784 17:26:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:43.784 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.784 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.784 17:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.784 17:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.784 17:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.784 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.784 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.126 00:17:44.126 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.126 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.126 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.415 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.415 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.415 17:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.415 17:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.415 17:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.415 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.415 { 00:17:44.415 "cntlid": 123, 00:17:44.415 "qid": 0, 00:17:44.415 "state": "enabled", 00:17:44.415 "thread": "nvmf_tgt_poll_group_000", 00:17:44.415 "listen_address": { 00:17:44.415 "trtype": "TCP", 00:17:44.415 "adrfam": "IPv4", 00:17:44.415 "traddr": "10.0.0.2", 00:17:44.415 "trsvcid": "4420" 00:17:44.415 }, 00:17:44.415 "peer_address": { 00:17:44.415 "trtype": "TCP", 00:17:44.415 "adrfam": "IPv4", 00:17:44.415 "traddr": "10.0.0.1", 00:17:44.415 "trsvcid": "55156" 00:17:44.415 }, 00:17:44.415 "auth": { 00:17:44.415 "state": "completed", 00:17:44.415 "digest": "sha512", 00:17:44.415 "dhgroup": "ffdhe4096" 00:17:44.415 } 00:17:44.415 } 00:17:44.415 ]' 00:17:44.415 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.415 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.415 17:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.415 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.415 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.415 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.415 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.415 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.673 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:17:45.238 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.239 17:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.496 00:17:45.496 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.496 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.496 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.753 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:17:45.753 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.753 17:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.753 17:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.753 17:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.753 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.753 { 00:17:45.753 "cntlid": 125, 00:17:45.753 "qid": 0, 00:17:45.753 "state": "enabled", 00:17:45.753 "thread": "nvmf_tgt_poll_group_000", 00:17:45.753 "listen_address": { 00:17:45.753 "trtype": "TCP", 00:17:45.753 "adrfam": "IPv4", 00:17:45.753 "traddr": "10.0.0.2", 00:17:45.753 "trsvcid": "4420" 00:17:45.753 }, 00:17:45.753 "peer_address": { 00:17:45.753 "trtype": "TCP", 00:17:45.753 "adrfam": "IPv4", 00:17:45.753 "traddr": "10.0.0.1", 00:17:45.753 "trsvcid": "55178" 00:17:45.753 }, 00:17:45.753 "auth": { 00:17:45.753 "state": "completed", 00:17:45.753 "digest": "sha512", 00:17:45.753 "dhgroup": "ffdhe4096" 00:17:45.753 } 00:17:45.753 } 00:17:45.753 ]' 00:17:45.753 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.753 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.753 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.753 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:45.753 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.011 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.011 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.011 17:26:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.011 17:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:17:46.576 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.577 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.577 17:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.577 17:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.577 17:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.577 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.577 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.577 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.834 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:46.834 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:17:46.834 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:46.834 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:46.834 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:46.834 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.834 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:46.834 17:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.834 17:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.834 17:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.834 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.834 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.092 00:17:47.092 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.092 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.092 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.350 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:17:47.350 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.350 17:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.350 17:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.350 17:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.350 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.351 { 00:17:47.351 "cntlid": 127, 00:17:47.351 "qid": 0, 00:17:47.351 "state": "enabled", 00:17:47.351 "thread": "nvmf_tgt_poll_group_000", 00:17:47.351 "listen_address": { 00:17:47.351 "trtype": "TCP", 00:17:47.351 "adrfam": "IPv4", 00:17:47.351 "traddr": "10.0.0.2", 00:17:47.351 "trsvcid": "4420" 00:17:47.351 }, 00:17:47.351 "peer_address": { 00:17:47.351 "trtype": "TCP", 00:17:47.351 "adrfam": "IPv4", 00:17:47.351 "traddr": "10.0.0.1", 00:17:47.351 "trsvcid": "55200" 00:17:47.351 }, 00:17:47.351 "auth": { 00:17:47.351 "state": "completed", 00:17:47.351 "digest": "sha512", 00:17:47.351 "dhgroup": "ffdhe4096" 00:17:47.351 } 00:17:47.351 } 00:17:47.351 ]' 00:17:47.351 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.351 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.351 17:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.351 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.351 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.351 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.351 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.351 17:26:06 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.608 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:17:48.174 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.174 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.174 17:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.174 17:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.174 17:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.174 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.174 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.174 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:48.174 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:48.431 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:17:48.431 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:17:48.431 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.431 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:48.431 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:48.431 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.431 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.432 17:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.432 17:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.432 17:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.432 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.432 17:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.689 00:17:48.689 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.689 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.689 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.946 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.946 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.946 17:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.946 17:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.946 17:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.946 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.946 { 00:17:48.946 "cntlid": 129, 00:17:48.946 "qid": 0, 00:17:48.946 "state": "enabled", 00:17:48.946 "thread": "nvmf_tgt_poll_group_000", 00:17:48.946 "listen_address": { 00:17:48.946 "trtype": "TCP", 00:17:48.946 "adrfam": "IPv4", 00:17:48.946 "traddr": "10.0.0.2", 00:17:48.946 "trsvcid": "4420" 00:17:48.947 }, 00:17:48.947 "peer_address": { 00:17:48.947 "trtype": "TCP", 00:17:48.947 "adrfam": "IPv4", 00:17:48.947 "traddr": "10.0.0.1", 00:17:48.947 "trsvcid": "59838" 00:17:48.947 }, 00:17:48.947 "auth": { 00:17:48.947 "state": "completed", 00:17:48.947 "digest": "sha512", 00:17:48.947 "dhgroup": "ffdhe6144" 00:17:48.947 } 00:17:48.947 } 00:17:48.947 ]' 00:17:48.947 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.947 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.947 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.947 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:48.947 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.947 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.947 17:26:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.947 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.204 17:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:17:49.770 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.770 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.770 17:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.770 17:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.770 17:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.770 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.770 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.770 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.028 17:26:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:50.028 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.028 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.028 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:50.028 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:50.028 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.028 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.028 17:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.028 17:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.028 17:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.028 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.028 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.286 00:17:50.286 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.286 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:17:50.286 17:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.544 { 00:17:50.544 "cntlid": 131, 00:17:50.544 "qid": 0, 00:17:50.544 "state": "enabled", 00:17:50.544 "thread": "nvmf_tgt_poll_group_000", 00:17:50.544 "listen_address": { 00:17:50.544 "trtype": "TCP", 00:17:50.544 "adrfam": "IPv4", 00:17:50.544 "traddr": "10.0.0.2", 00:17:50.544 "trsvcid": "4420" 00:17:50.544 }, 00:17:50.544 "peer_address": { 00:17:50.544 "trtype": "TCP", 00:17:50.544 "adrfam": "IPv4", 00:17:50.544 "traddr": "10.0.0.1", 00:17:50.544 "trsvcid": "59870" 00:17:50.544 }, 00:17:50.544 "auth": { 00:17:50.544 "state": "completed", 00:17:50.544 "digest": "sha512", 00:17:50.544 "dhgroup": "ffdhe6144" 00:17:50.544 } 00:17:50.544 } 00:17:50.544 ]' 00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.544 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.802 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:17:51.368 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.368 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.368 17:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.368 17:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.368 17:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.368 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.369 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.369 17:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.626 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:51.626 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.626 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.626 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:51.626 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.626 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.626 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.626 17:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.626 17:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.626 17:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.626 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.626 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.884 00:17:51.884 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:17:51.884 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.884 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.142 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.142 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.142 17:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.142 17:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.142 17:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.142 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.142 { 00:17:52.142 "cntlid": 133, 00:17:52.142 "qid": 0, 00:17:52.142 "state": "enabled", 00:17:52.142 "thread": "nvmf_tgt_poll_group_000", 00:17:52.142 "listen_address": { 00:17:52.142 "trtype": "TCP", 00:17:52.142 "adrfam": "IPv4", 00:17:52.142 "traddr": "10.0.0.2", 00:17:52.142 "trsvcid": "4420" 00:17:52.142 }, 00:17:52.142 "peer_address": { 00:17:52.142 "trtype": "TCP", 00:17:52.142 "adrfam": "IPv4", 00:17:52.142 "traddr": "10.0.0.1", 00:17:52.142 "trsvcid": "59910" 00:17:52.142 }, 00:17:52.142 "auth": { 00:17:52.142 "state": "completed", 00:17:52.142 "digest": "sha512", 00:17:52.142 "dhgroup": "ffdhe6144" 00:17:52.142 } 00:17:52.142 } 00:17:52.142 ]' 00:17:52.142 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.142 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.142 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.142 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:52.142 17:26:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.142 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.142 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.142 17:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.399 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:17:52.964 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.964 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.964 17:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.964 17:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.964 17:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.964 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.964 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.964 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:53.223 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:53.223 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.223 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:53.223 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:53.223 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:53.223 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.223 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:53.223 17:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.223 17:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.223 17:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.223 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.223 17:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.481 00:17:53.481 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:17:53.481 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.481 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.740 { 00:17:53.740 "cntlid": 135, 00:17:53.740 "qid": 0, 00:17:53.740 "state": "enabled", 00:17:53.740 "thread": "nvmf_tgt_poll_group_000", 00:17:53.740 "listen_address": { 00:17:53.740 "trtype": "TCP", 00:17:53.740 "adrfam": "IPv4", 00:17:53.740 "traddr": "10.0.0.2", 00:17:53.740 "trsvcid": "4420" 00:17:53.740 }, 00:17:53.740 "peer_address": { 00:17:53.740 "trtype": "TCP", 00:17:53.740 "adrfam": "IPv4", 00:17:53.740 "traddr": "10.0.0.1", 00:17:53.740 "trsvcid": "59926" 00:17:53.740 }, 00:17:53.740 "auth": { 00:17:53.740 "state": "completed", 00:17:53.740 "digest": "sha512", 00:17:53.740 "dhgroup": "ffdhe6144" 00:17:53.740 } 00:17:53.740 } 00:17:53.740 ]' 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.740 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.998 17:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.562 17:26:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.562 17:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.820 17:26:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.820 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.820 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.077 00:17:55.077 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.077 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.077 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.335 17:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.335 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.335 17:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.335 17:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.335 17:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.335 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.335 { 00:17:55.335 "cntlid": 137, 00:17:55.335 "qid": 0, 00:17:55.335 "state": "enabled", 00:17:55.335 "thread": "nvmf_tgt_poll_group_000", 00:17:55.335 "listen_address": { 00:17:55.335 "trtype": "TCP", 00:17:55.335 "adrfam": "IPv4", 00:17:55.335 "traddr": "10.0.0.2", 00:17:55.335 "trsvcid": "4420" 00:17:55.335 }, 00:17:55.335 "peer_address": { 00:17:55.335 "trtype": "TCP", 00:17:55.335 "adrfam": "IPv4", 00:17:55.335 "traddr": "10.0.0.1", 00:17:55.335 "trsvcid": "59948" 00:17:55.335 }, 00:17:55.335 "auth": { 00:17:55.335 "state": "completed", 00:17:55.335 "digest": "sha512", 00:17:55.335 "dhgroup": "ffdhe8192" 00:17:55.335 } 00:17:55.335 } 00:17:55.335 ]' 00:17:55.335 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.335 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.335 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:17:55.335 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.335 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.592 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.592 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.592 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.592 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:17:56.159 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.159 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.159 17:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.159 17:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.159 17:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.159 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.159 17:26:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:56.159 17:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:56.418 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:56.418 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.418 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.418 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:56.418 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.418 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.418 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.418 17:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.418 17:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.418 17:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.418 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.418 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.984 00:17:56.984 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.984 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.984 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.984 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.984 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.984 17:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.984 17:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.984 17:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.984 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.984 { 00:17:56.984 "cntlid": 139, 00:17:56.984 "qid": 0, 00:17:56.984 "state": "enabled", 00:17:56.984 "thread": "nvmf_tgt_poll_group_000", 00:17:56.984 "listen_address": { 00:17:56.984 "trtype": "TCP", 00:17:56.984 "adrfam": "IPv4", 00:17:56.984 "traddr": "10.0.0.2", 00:17:56.984 "trsvcid": "4420" 00:17:56.984 }, 00:17:56.984 "peer_address": { 00:17:56.984 "trtype": "TCP", 00:17:56.984 "adrfam": "IPv4", 00:17:56.984 "traddr": "10.0.0.1", 00:17:56.984 "trsvcid": "59956" 00:17:56.984 }, 00:17:56.984 "auth": { 00:17:56.984 "state": "completed", 00:17:56.984 "digest": "sha512", 00:17:56.984 "dhgroup": "ffdhe8192" 00:17:56.984 } 00:17:56.984 } 00:17:56.984 ]' 00:17:56.984 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.242 17:26:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.242 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.242 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.242 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.242 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.242 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.242 17:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.500 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MDEyNDU4YTQ0ODBmMzYwZjA4M2NkYmVjZDgzNmU4NjFJ0Yz9: --dhchap-ctrl-secret DHHC-1:02:ZTY5M2U3OWVjMGM4NjM2NGY2ZDM1MjVmZTg0NDFkNzA3N2U3MWVkYjIwZDdjMDFkwBpKFw==: 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.067 17:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.633 00:17:58.633 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.633 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.633 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.891 { 00:17:58.891 "cntlid": 141, 00:17:58.891 "qid": 0, 00:17:58.891 "state": "enabled", 00:17:58.891 "thread": "nvmf_tgt_poll_group_000", 00:17:58.891 "listen_address": { 00:17:58.891 "trtype": "TCP", 00:17:58.891 "adrfam": "IPv4", 00:17:58.891 "traddr": "10.0.0.2", 00:17:58.891 "trsvcid": "4420" 00:17:58.891 }, 00:17:58.891 "peer_address": { 00:17:58.891 "trtype": "TCP", 00:17:58.891 "adrfam": "IPv4", 00:17:58.891 "traddr": "10.0.0.1", 00:17:58.891 "trsvcid": "34888" 00:17:58.891 }, 00:17:58.891 "auth": { 00:17:58.891 "state": "completed", 00:17:58.891 "digest": "sha512", 00:17:58.891 "dhgroup": "ffdhe8192" 00:17:58.891 } 00:17:58.891 } 00:17:58.891 ]' 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.891 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.149 17:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MmMyMjBjNzQ1OWYwZTA3ZDA2NjgxY2ZiMDgyMDZkOGZiNzU2NjVjYzhiYTEyOTk2OC6Sew==: --dhchap-ctrl-secret DHHC-1:01:ZjY2Njg3MGY5YTU2MWQ1YjgzZTdhN2E0YmZkYmJjOTJ3bji6: 00:17:59.715 17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.715 17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:59.715 17:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.715 17:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.715 17:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.715 
17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.715 17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:59.715 17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:59.973 17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:59.973 17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.973 17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.973 17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:59.973 17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.973 17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.973 17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:59.973 17:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.973 17:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.973 17:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.973 17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.973 17:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.539 00:18:00.540 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.540 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.540 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.540 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.540 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.540 17:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.540 17:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.540 17:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.540 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.540 { 00:18:00.540 "cntlid": 143, 00:18:00.540 "qid": 0, 00:18:00.540 "state": "enabled", 00:18:00.540 "thread": "nvmf_tgt_poll_group_000", 00:18:00.540 "listen_address": { 00:18:00.540 "trtype": "TCP", 00:18:00.540 "adrfam": "IPv4", 00:18:00.540 "traddr": "10.0.0.2", 00:18:00.540 "trsvcid": "4420" 00:18:00.540 }, 00:18:00.540 "peer_address": { 00:18:00.540 "trtype": "TCP", 00:18:00.540 "adrfam": "IPv4", 00:18:00.540 "traddr": "10.0.0.1", 00:18:00.540 "trsvcid": "34908" 00:18:00.540 }, 00:18:00.540 "auth": { 00:18:00.540 "state": "completed", 00:18:00.540 "digest": "sha512", 00:18:00.540 "dhgroup": "ffdhe8192" 00:18:00.540 } 00:18:00.540 } 00:18:00.540 ]' 00:18:00.540 17:26:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.540 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.540 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.540 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.540 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.798 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.798 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.798 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.798 17:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:18:01.363 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.363 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.363 17:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.363 17:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.363 17:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.363 
17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:01.363 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:01.363 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:01.363 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:01.363 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:01.363 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:01.621 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:01.621 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.621 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:01.621 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:01.621 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.621 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.621 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.621 17:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.621 17:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.621 17:26:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.621 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.621 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.209 00:18:02.209 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.209 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.209 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.209 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.209 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.209 17:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.209 17:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.209 17:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.209 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.209 { 00:18:02.209 "cntlid": 145, 00:18:02.209 "qid": 0, 00:18:02.209 "state": "enabled", 00:18:02.209 "thread": "nvmf_tgt_poll_group_000", 00:18:02.209 "listen_address": { 00:18:02.209 "trtype": "TCP", 00:18:02.209 "adrfam": 
"IPv4", 00:18:02.209 "traddr": "10.0.0.2", 00:18:02.209 "trsvcid": "4420" 00:18:02.209 }, 00:18:02.209 "peer_address": { 00:18:02.209 "trtype": "TCP", 00:18:02.209 "adrfam": "IPv4", 00:18:02.209 "traddr": "10.0.0.1", 00:18:02.209 "trsvcid": "34940" 00:18:02.209 }, 00:18:02.209 "auth": { 00:18:02.209 "state": "completed", 00:18:02.209 "digest": "sha512", 00:18:02.209 "dhgroup": "ffdhe8192" 00:18:02.209 } 00:18:02.209 } 00:18:02.209 ]' 00:18:02.209 17:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.467 17:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.467 17:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.467 17:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.467 17:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.467 17:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.467 17:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.467 17:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.726 17:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:ODM0NDI0MjEyYzc5NzNkZDA0YmJhNmQxMmZhNzlhNTExZTkwZjY5YTY4MTkwNGE3RyQ+fQ==: --dhchap-ctrl-secret DHHC-1:03:YmM2NzliOTc2ZTA2N2I1NDUxNmJhNzFhNzExNTcwYzU0ZDA1YzdkYWNjOTZjNGM1ZTRiY2JkMzhiOGJlMzI5Mf/fHv0=: 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.292 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.292 17:26:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.292 17:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:03.293 17:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:03.551 request: 00:18:03.551 { 00:18:03.551 "name": "nvme0", 00:18:03.551 "trtype": "tcp", 00:18:03.551 "traddr": "10.0.0.2", 00:18:03.551 "adrfam": "ipv4", 00:18:03.551 "trsvcid": "4420", 00:18:03.551 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:03.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:03.551 "prchk_reftag": false, 00:18:03.551 "prchk_guard": false, 00:18:03.551 "hdgst": false, 00:18:03.551 "ddgst": false, 00:18:03.551 "dhchap_key": "key2", 00:18:03.551 "method": "bdev_nvme_attach_controller", 00:18:03.551 "req_id": 1 00:18:03.551 } 00:18:03.551 Got JSON-RPC error response 00:18:03.551 response: 00:18:03.551 { 00:18:03.551 "code": -5, 00:18:03.551 "message": "Input/output error" 00:18:03.551 } 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.551 
17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:03.551 17:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:04.118 request: 00:18:04.118 { 00:18:04.118 "name": "nvme0", 00:18:04.118 "trtype": "tcp", 00:18:04.118 "traddr": "10.0.0.2", 00:18:04.118 "adrfam": "ipv4", 00:18:04.118 "trsvcid": "4420", 00:18:04.118 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:04.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:04.118 "prchk_reftag": false, 00:18:04.118 "prchk_guard": false, 00:18:04.118 "hdgst": false, 00:18:04.118 "ddgst": false, 00:18:04.118 "dhchap_key": "key1", 00:18:04.118 "dhchap_ctrlr_key": "ckey2", 00:18:04.118 "method": "bdev_nvme_attach_controller", 00:18:04.118 "req_id": 1 00:18:04.118 } 00:18:04.118 Got JSON-RPC error response 00:18:04.119 response: 00:18:04.119 { 00:18:04.119 "code": -5, 00:18:04.119 "message": "Input/output error" 00:18:04.119 } 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.119 17:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.686 request: 00:18:04.686 { 00:18:04.686 "name": "nvme0", 00:18:04.686 "trtype": "tcp", 00:18:04.686 "traddr": "10.0.0.2", 00:18:04.686 "adrfam": "ipv4", 00:18:04.686 "trsvcid": "4420", 00:18:04.686 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:04.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:04.686 "prchk_reftag": false, 00:18:04.687 "prchk_guard": false, 00:18:04.687 "hdgst": false, 00:18:04.687 "ddgst": false, 00:18:04.687 "dhchap_key": "key1", 00:18:04.687 "dhchap_ctrlr_key": "ckey1", 00:18:04.687 "method": "bdev_nvme_attach_controller", 00:18:04.687 "req_id": 1 00:18:04.687 } 00:18:04.687 Got JSON-RPC error response 00:18:04.687 response: 00:18:04.687 { 00:18:04.687 "code": -5, 00:18:04.687 "message": "Input/output error" 00:18:04.687 } 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.687 17:26:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 4062766 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4062766 ']' 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4062766 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4062766 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4062766' 00:18:04.687 killing process with pid 4062766 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4062766 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4062766 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L 
nvmf_auth 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4083209 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4083209 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4083209 ']' 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.687 17:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 4083209 00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4083209 ']' 00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.620 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.878 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.878 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:05.878 17:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:05.878 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.878 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.878 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.878 17:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:05.878 17:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.878 17:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.878 17:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:05.879 17:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.879 17:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.879 17:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:05.879 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.879 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.879 17:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.879 17:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.879 17:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.445 00:18:06.445 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.445 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.445 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.703 { 00:18:06.703 "cntlid": 1, 00:18:06.703 "qid": 0, 00:18:06.703 "state": "enabled", 00:18:06.703 "thread": "nvmf_tgt_poll_group_000", 00:18:06.703 "listen_address": { 00:18:06.703 "trtype": "TCP", 00:18:06.703 "adrfam": "IPv4", 00:18:06.703 "traddr": "10.0.0.2", 00:18:06.703 "trsvcid": "4420" 00:18:06.703 }, 00:18:06.703 "peer_address": { 00:18:06.703 "trtype": "TCP", 00:18:06.703 "adrfam": "IPv4", 00:18:06.703 "traddr": "10.0.0.1", 00:18:06.703 "trsvcid": 
"35006" 00:18:06.703 }, 00:18:06.703 "auth": { 00:18:06.703 "state": "completed", 00:18:06.703 "digest": "sha512", 00:18:06.703 "dhgroup": "ffdhe8192" 00:18:06.703 } 00:18:06.703 } 00:18:06.703 ]' 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.703 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.961 17:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzdjYTY1NDBlZTQyYTY0MDNjZDc2YTE1NmZjOWEwY2E1YzU2ZDhlOWMxZjM0ZTEzNmEzOWQyNWUwOWRlZjczZJUYea0=: 00:18:07.528 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.528 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.528 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:07.528 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.528 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.528 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:07.528 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.528 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.528 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.528 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:07.528 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:07.787 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.787 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:07.787 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.787 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:07.787 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.787 17:26:26 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:07.787 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.787 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.787 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.787 request: 00:18:07.787 { 00:18:07.787 "name": "nvme0", 00:18:07.787 "trtype": "tcp", 00:18:07.787 "traddr": "10.0.0.2", 00:18:07.787 "adrfam": "ipv4", 00:18:07.787 "trsvcid": "4420", 00:18:07.787 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:07.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:07.787 "prchk_reftag": false, 00:18:07.787 "prchk_guard": false, 00:18:07.787 "hdgst": false, 00:18:07.787 "ddgst": false, 00:18:07.787 "dhchap_key": "key3", 00:18:07.787 "method": "bdev_nvme_attach_controller", 00:18:07.787 "req_id": 1 00:18:07.787 } 00:18:07.787 Got JSON-RPC error response 00:18:07.787 response: 00:18:07.787 { 00:18:07.787 "code": -5, 00:18:07.787 "message": "Input/output error" 00:18:07.787 } 00:18:07.787 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:07.787 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:07.788 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:07.788 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:07.788 17:26:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:07.788 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:07.788 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:07.788 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:08.046 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.046 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:08.046 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.046 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:08.046 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.046 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:08.046 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.046 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.046 
17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.304 request: 00:18:08.304 { 00:18:08.304 "name": "nvme0", 00:18:08.304 "trtype": "tcp", 00:18:08.304 "traddr": "10.0.0.2", 00:18:08.304 "adrfam": "ipv4", 00:18:08.304 "trsvcid": "4420", 00:18:08.304 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:08.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:08.305 "prchk_reftag": false, 00:18:08.305 "prchk_guard": false, 00:18:08.305 "hdgst": false, 00:18:08.305 "ddgst": false, 00:18:08.305 "dhchap_key": "key3", 00:18:08.305 "method": "bdev_nvme_attach_controller", 00:18:08.305 "req_id": 1 00:18:08.305 } 00:18:08.305 Got JSON-RPC error response 00:18:08.305 response: 00:18:08.305 { 00:18:08.305 "code": -5, 00:18:08.305 "message": "Input/output error" 00:18:08.305 } 00:18:08.305 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:08.305 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:08.305 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:08.305 17:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:08.305 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:08.305 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:08.305 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:08.305 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:08.305 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:08.305 17:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:08.305 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.305 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.305 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.305 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.305 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.305 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.305 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:08.563 request: 00:18:08.563 { 00:18:08.563 "name": "nvme0", 00:18:08.563 "trtype": "tcp", 00:18:08.563 "traddr": "10.0.0.2", 00:18:08.563 "adrfam": "ipv4", 00:18:08.563 "trsvcid": "4420", 00:18:08.563 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:08.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:08.563 "prchk_reftag": false, 00:18:08.563 "prchk_guard": false, 00:18:08.563 "hdgst": false, 00:18:08.563 "ddgst": false, 00:18:08.563 "dhchap_key": "key0", 00:18:08.563 "dhchap_ctrlr_key": "key1", 00:18:08.563 "method": "bdev_nvme_attach_controller", 00:18:08.563 "req_id": 1 00:18:08.563 } 00:18:08.563 Got JSON-RPC error response 00:18:08.563 response: 00:18:08.563 { 
00:18:08.563 "code": -5, 00:18:08.563 "message": "Input/output error" 00:18:08.563 } 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:08.563 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:08.821 00:18:08.821 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:08.821 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:08.821 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.080 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.080 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.080 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.080 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:18:09.080 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:09.080 17:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4062899 00:18:09.080 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4062899 ']' 00:18:09.080 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4062899 00:18:09.080 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:09.080 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.080 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4062899 00:18:09.339 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:09.339 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:09.339 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4062899' 00:18:09.339 killing process with pid 4062899 00:18:09.339 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4062899 00:18:09.339 17:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4062899 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:09.597 rmmod nvme_tcp 00:18:09.597 rmmod nvme_fabrics 
00:18:09.597 rmmod nvme_keyring 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 4083209 ']' 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 4083209 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4083209 ']' 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4083209 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4083209 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4083209' 00:18:09.597 killing process with pid 4083209 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4083209 00:18:09.597 17:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4083209 00:18:09.855 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:09.855 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:09.855 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:09.855 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.855 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:09.855 17:26:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.855 17:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.855 17:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.816 17:26:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:11.816 17:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.UBS /tmp/spdk.key-sha256.kXf /tmp/spdk.key-sha384.RiA /tmp/spdk.key-sha512.YyN /tmp/spdk.key-sha512.sv7 /tmp/spdk.key-sha384.mYs /tmp/spdk.key-sha256.TSd '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:11.816 00:18:11.816 real 2m10.489s 00:18:11.816 user 4m59.566s 00:18:11.816 sys 0m20.460s 00:18:11.816 17:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.816 17:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.816 ************************************ 00:18:11.816 END TEST nvmf_auth_target 00:18:11.816 ************************************ 00:18:11.816 17:26:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:11.816 17:26:30 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:11.816 17:26:30 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:11.816 17:26:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:11.816 17:26:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:11.816 17:26:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:12.075 
************************************ 00:18:12.075 START TEST nvmf_bdevio_no_huge 00:18:12.075 ************************************ 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:12.075 * Looking for test storage... 00:18:12.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 
00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:12.075 17:26:30 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:12.075 17:26:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:17.341 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:17.341 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:17.341 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:17.342 Found net devices under 0000:86:00.0: cvl_0_0 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:17.342 Found net devices under 0000:86:00.1: cvl_0_1 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:17.342 17:26:35 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.342 
17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:17.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:18:17.342 00:18:17.342 --- 10.0.0.2 ping statistics --- 00:18:17.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.342 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:18:17.342 00:18:17.342 --- 10.0.0.1 ping statistics --- 00:18:17.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.342 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:17.342 17:26:35 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=4087409 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 4087409 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 4087409 ']' 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.342 17:26:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.342 [2024-07-12 17:26:36.028521] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:18:17.342 [2024-07-12 17:26:36.028567] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:17.342 [2024-07-12 17:26:36.090599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.601 [2024-07-12 17:26:36.175290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.601 [2024-07-12 17:26:36.175327] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.601 [2024-07-12 17:26:36.175334] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.601 [2024-07-12 17:26:36.175339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.601 [2024-07-12 17:26:36.175344] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:17.601 [2024-07-12 17:26:36.175477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:17.601 [2024-07-12 17:26:36.175568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:17.601 [2024-07-12 17:26:36.175675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.601 [2024-07-12 17:26:36.175676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.167 [2024-07-12 17:26:36.875186] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.167 Malloc0 00:18:18.167 17:26:36 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.167 [2024-07-12 17:26:36.919449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:18.167 17:26:36 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:18.167 { 00:18:18.167 "params": { 00:18:18.167 "name": "Nvme$subsystem", 00:18:18.167 "trtype": "$TEST_TRANSPORT", 00:18:18.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.167 "adrfam": "ipv4", 00:18:18.167 "trsvcid": "$NVMF_PORT", 00:18:18.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.167 "hdgst": ${hdgst:-false}, 00:18:18.167 "ddgst": ${ddgst:-false} 00:18:18.167 }, 00:18:18.167 "method": "bdev_nvme_attach_controller" 00:18:18.167 } 00:18:18.167 EOF 00:18:18.167 )") 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:18.167 17:26:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:18.167 "params": { 00:18:18.167 "name": "Nvme1", 00:18:18.167 "trtype": "tcp", 00:18:18.167 "traddr": "10.0.0.2", 00:18:18.167 "adrfam": "ipv4", 00:18:18.167 "trsvcid": "4420", 00:18:18.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.168 "hdgst": false, 00:18:18.168 "ddgst": false 00:18:18.168 }, 00:18:18.168 "method": "bdev_nvme_attach_controller" 00:18:18.168 }' 00:18:18.425 [2024-07-12 17:26:36.968075] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:18:18.425 [2024-07-12 17:26:36.968123] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4087639 ] 00:18:18.425 [2024-07-12 17:26:37.027665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:18.425 [2024-07-12 17:26:37.113472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.425 [2024-07-12 17:26:37.113568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.425 [2024-07-12 17:26:37.113568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.683 I/O targets: 00:18:18.683 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:18.683 00:18:18.683 00:18:18.683 CUnit - A unit testing framework for C - Version 2.1-3 00:18:18.683 http://cunit.sourceforge.net/ 00:18:18.683 00:18:18.683 00:18:18.683 Suite: bdevio tests on: Nvme1n1 00:18:18.683 Test: blockdev write read block ...passed 00:18:18.683 Test: blockdev write zeroes read block ...passed 00:18:18.683 Test: blockdev write zeroes read no split ...passed 00:18:18.683 Test: blockdev write zeroes read split ...passed 00:18:18.683 Test: blockdev write zeroes read split partial ...passed 00:18:18.683 Test: blockdev reset ...[2024-07-12 17:26:37.460614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:18.683 [2024-07-12 17:26:37.460677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776300 (9): Bad file descriptor 00:18:18.941 [2024-07-12 17:26:37.477340] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:18.941 passed 00:18:18.941 Test: blockdev write read 8 blocks ...passed 00:18:18.941 Test: blockdev write read size > 128k ...passed 00:18:18.941 Test: blockdev write read invalid size ...passed 00:18:18.941 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:18.941 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:18.941 Test: blockdev write read max offset ...passed 00:18:18.941 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:18.941 Test: blockdev writev readv 8 blocks ...passed 00:18:18.941 Test: blockdev writev readv 30 x 1block ...passed 00:18:19.200 Test: blockdev writev readv block ...passed 00:18:19.200 Test: blockdev writev readv size > 128k ...passed 00:18:19.200 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:19.200 Test: blockdev comparev and writev ...[2024-07-12 17:26:37.728223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.200 [2024-07-12 17:26:37.728252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.200 [2024-07-12 17:26:37.728265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.200 [2024-07-12 17:26:37.728273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:19.200 [2024-07-12 17:26:37.728537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.200 [2024-07-12 17:26:37.728549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:19.200 [2024-07-12 17:26:37.728560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.200 [2024-07-12 17:26:37.728567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:19.200 [2024-07-12 17:26:37.728820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.200 [2024-07-12 17:26:37.728830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:19.200 [2024-07-12 17:26:37.728841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.200 [2024-07-12 17:26:37.728851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:19.200 [2024-07-12 17:26:37.729090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.200 [2024-07-12 17:26:37.729099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:19.200 [2024-07-12 17:26:37.729110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.200 [2024-07-12 17:26:37.729117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:19.200 passed 00:18:19.200 Test: blockdev nvme passthru rw ...passed 00:18:19.200 Test: blockdev nvme passthru vendor specific ...[2024-07-12 17:26:37.811745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.200 [2024-07-12 17:26:37.811762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:19.200 [2024-07-12 17:26:37.811886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.200 [2024-07-12 17:26:37.811895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:19.200 [2024-07-12 17:26:37.812015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.200 [2024-07-12 17:26:37.812024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:19.200 [2024-07-12 17:26:37.812136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.200 [2024-07-12 17:26:37.812145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:19.200 passed 00:18:19.200 Test: blockdev nvme admin passthru ...passed 00:18:19.200 Test: blockdev copy ...passed 00:18:19.200 00:18:19.200 Run Summary: Type Total Ran Passed Failed Inactive 00:18:19.200 suites 1 1 n/a 0 0 00:18:19.200 tests 23 23 23 0 0 00:18:19.200 asserts 152 152 152 0 n/a 00:18:19.200 00:18:19.200 Elapsed time = 1.231 seconds 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:19.459 rmmod nvme_tcp 00:18:19.459 rmmod nvme_fabrics 00:18:19.459 rmmod nvme_keyring 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 4087409 ']' 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 4087409 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 4087409 ']' 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 4087409 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.459 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4087409 00:18:19.717 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:19.717 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:19.717 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 4087409' 00:18:19.717 killing process with pid 4087409 00:18:19.717 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 4087409 00:18:19.717 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 4087409 00:18:19.975 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:19.975 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:19.975 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:19.975 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.975 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.975 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.975 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.975 17:26:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.877 17:26:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:21.877 00:18:21.877 real 0m10.045s 00:18:21.877 user 0m12.920s 00:18:21.877 sys 0m4.830s 00:18:21.877 17:26:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:21.877 17:26:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.877 ************************************ 00:18:21.877 END TEST nvmf_bdevio_no_huge 00:18:21.877 ************************************ 00:18:22.136 17:26:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:22.136 17:26:40 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:22.136 17:26:40 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:22.136 17:26:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.136 17:26:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:22.136 ************************************ 00:18:22.136 START TEST nvmf_tls 00:18:22.136 ************************************ 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:22.136 * Looking for test storage... 00:18:22.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:22.136 17:26:40 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.136 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:22.137 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:22.137 17:26:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:22.137 17:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:27.405 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:27.405 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.405 17:26:46 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:27.405 Found net devices under 0000:86:00.0: cvl_0_0 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:27.405 Found net devices under 0000:86:00.1: cvl_0_1 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.405 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:27.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:18:27.664 00:18:27.664 --- 10.0.0.2 ping statistics --- 00:18:27.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.664 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:27.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:18:27.664 00:18:27.664 --- 10.0.0.1 ping statistics --- 00:18:27.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.664 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:27.664 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:27.923 17:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:27.923 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.923 17:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.923 17:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.923 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4091391 00:18:27.923 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4091391 00:18:27.923 17:26:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:27.923 17:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4091391 ']' 00:18:27.923 17:26:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.923 17:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.923 17:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.923 17:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.923 17:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.923 [2024-07-12 17:26:46.503192] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:18:27.923 [2024-07-12 17:26:46.503232] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.923 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.923 [2024-07-12 17:26:46.559757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.923 [2024-07-12 17:26:46.635920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.923 [2024-07-12 17:26:46.635958] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.923 [2024-07-12 17:26:46.635964] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.923 [2024-07-12 17:26:46.635974] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.923 [2024-07-12 17:26:46.635979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
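For readers following the trace: the nvmf_tcp_init section above (nvmf/common.sh@248-268) builds the two-endpoint TCP topology by moving one of the two cvl netdevs into a private network namespace, so one host can act as both initiator and target with separate network stacks. Below is a minimal standalone sketch of that plumbing. The device names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and port 4420 are taken from the log; the run helper and the DRY_RUN switch are illustrative additions (the real commands require root and the actual netdevs), so this is a sketch of the sequence, not SPDK's helper verbatim.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Print commands instead of executing them unless DRY_RUN=0;
# executing for real requires root and the cvl_* netdevs.
: "${DRY_RUN:=1}"
run() { if [[ "$DRY_RUN" == 1 ]]; then echo "$*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk

run ip netns add "$NS"                         # private namespace for the target
run ip link set cvl_0_0 netns "$NS"            # target-side port moves into it
run ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator IP stays in default ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
run ping -c 1 10.0.0.2                         # reachability, initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1     # and target -> initiator
```

This split is why every nvmf_tgt and spdk_nvme_perf invocation later in the log is wrapped in ip netns exec cvl_0_0_ns_spdk: the target process must run inside the namespace that owns cvl_0_0 and 10.0.0.2.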
00:18:27.923 [2024-07-12 17:26:46.635998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.859 17:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.859 17:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:28.859 17:26:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.859 17:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.859 17:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.859 17:26:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.859 17:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:28.860 17:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:28.860 true 00:18:28.860 17:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:28.860 17:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:29.118 17:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:29.118 17:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:29.118 17:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:29.118 17:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:29.118 17:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:29.376 17:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:29.376 17:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:29.376 17:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:29.634 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:29.634 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:29.634 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:29.634 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:29.634 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:29.635 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:29.893 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:29.893 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:29.893 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:30.151 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:30.151 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:30.151 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:30.151 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:30.151 17:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:30.409 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:30.409 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.N0TMusoiJJ 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:30.667 
17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.kZjjWI8MVT 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.N0TMusoiJJ 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.kZjjWI8MVT 00:18:30.667 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:30.926 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:31.184 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.N0TMusoiJJ 00:18:31.184 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.N0TMusoiJJ 00:18:31.184 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:31.184 [2024-07-12 17:26:49.877672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.184 17:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:31.443 17:26:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:31.701 [2024-07-12 17:26:50.226571] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.701 [2024-07-12 17:26:50.226760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:18:31.701 17:26:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:31.701 malloc0 00:18:31.701 17:26:50 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:31.959 17:26:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N0TMusoiJJ 00:18:32.217 [2024-07-12 17:26:50.740072] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:32.217 17:26:50 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.N0TMusoiJJ 00:18:32.217 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.207 Initializing NVMe Controllers 00:18:42.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:42.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:42.208 Initialization complete. Launching workers. 
00:18:42.208 ======================================================== 00:18:42.208 Latency(us) 00:18:42.208 Device Information : IOPS MiB/s Average min max 00:18:42.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16507.89 64.48 3877.33 806.85 5666.16 00:18:42.208 ======================================================== 00:18:42.208 Total : 16507.89 64.48 3877.33 806.85 5666.16 00:18:42.208 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N0TMusoiJJ 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.N0TMusoiJJ' 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4093793 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4093793 /var/tmp/bdevperf.sock 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4093793 ']' 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.208 17:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.208 [2024-07-12 17:27:00.873339] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:18:42.208 [2024-07-12 17:27:00.873388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4093793 ] 00:18:42.208 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.208 [2024-07-12 17:27:00.924028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.465 [2024-07-12 17:27:01.003040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.026 17:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.026 17:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:43.026 17:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N0TMusoiJJ 00:18:43.284 [2024-07-12 17:27:01.861045] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.284 [2024-07-12 17:27:01.861133] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:43.284 TLSTESTn1 00:18:43.284 17:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:43.284 Running I/O for 10 seconds... 00:18:55.471 00:18:55.471 Latency(us) 00:18:55.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.471 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:55.471 Verification LBA range: start 0x0 length 0x2000 00:18:55.471 TLSTESTn1 : 10.01 5601.78 21.88 0.00 0.00 22814.09 5014.93 27012.23 00:18:55.471 =================================================================================================================== 00:18:55.471 Total : 5601.78 21.88 0.00 0.00 22814.09 5014.93 27012.23 00:18:55.471 0 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4093793 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4093793 ']' 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4093793 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4093793 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4093793' 00:18:55.471 killing process with pid 4093793 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4093793 00:18:55.471 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.471 00:18:55.471 Latency(us) 
00:18:55.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.471 =================================================================================================================== 00:18:55.471 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.471 [2024-07-12 17:27:12.129503] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4093793 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kZjjWI8MVT 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kZjjWI8MVT 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kZjjWI8MVT 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kZjjWI8MVT' 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4096094 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4096094 /var/tmp/bdevperf.sock 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4096094 ']' 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.471 17:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.471 [2024-07-12 17:27:12.357452] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:18:55.471 [2024-07-12 17:27:12.357498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096094 ] 00:18:55.471 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.471 [2024-07-12 17:27:12.407671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.471 [2024-07-12 17:27:12.486163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kZjjWI8MVT 00:18:55.471 [2024-07-12 17:27:13.312638] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.471 [2024-07-12 17:27:13.312705] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:55.471 [2024-07-12 17:27:13.317107] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:55.471 [2024-07-12 17:27:13.317804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204a570 (107): Transport endpoint is not connected 00:18:55.471 [2024-07-12 17:27:13.318797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204a570 (9): Bad file descriptor 00:18:55.471 [2024-07-12 
17:27:13.319798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:55.471 [2024-07-12 17:27:13.319808] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:55.471 [2024-07-12 17:27:13.319818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:55.471 request: 00:18:55.471 { 00:18:55.471 "name": "TLSTEST", 00:18:55.471 "trtype": "tcp", 00:18:55.471 "traddr": "10.0.0.2", 00:18:55.471 "adrfam": "ipv4", 00:18:55.471 "trsvcid": "4420", 00:18:55.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.471 "prchk_reftag": false, 00:18:55.471 "prchk_guard": false, 00:18:55.471 "hdgst": false, 00:18:55.471 "ddgst": false, 00:18:55.471 "psk": "/tmp/tmp.kZjjWI8MVT", 00:18:55.471 "method": "bdev_nvme_attach_controller", 00:18:55.471 "req_id": 1 00:18:55.471 } 00:18:55.471 Got JSON-RPC error response 00:18:55.471 response: 00:18:55.471 { 00:18:55.471 "code": -5, 00:18:55.471 "message": "Input/output error" 00:18:55.471 } 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4096094 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4096094 ']' 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4096094 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096094 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
4096094' 00:18:55.471 killing process with pid 4096094 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4096094 00:18:55.471 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.471 00:18:55.471 Latency(us) 00:18:55.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.471 =================================================================================================================== 00:18:55.471 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:55.471 [2024-07-12 17:27:13.392152] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4096094 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.N0TMusoiJJ 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.N0TMusoiJJ 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:55.471 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.N0TMusoiJJ 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.N0TMusoiJJ' 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4096332 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4096332 /var/tmp/bdevperf.sock 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4096332 ']' 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
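The "Waiting for process to start up and listen on UNIX domain socket ..." messages here and earlier in the log come from waitforlisten (autotest_common.sh@829-838), which blocks until a freshly launched app is both alive and reachable on its JSON-RPC UNIX socket. SPDK's real helper additionally issues an RPC to confirm the server answers; the version below is a simplified sketch that only checks the process and the socket file, and the retry count and sleep interval are illustrative assumptions.

```shell
#!/usr/bin/env bash

# Simplified waitforlisten sketch: succeed once $pid is alive and $sock
# exists as a UNIX socket; fail if the process dies or ~10s elapse.
# (The real SPDK helper also confirms the RPC server responds.)
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        if ! kill -0 "$pid" 2> /dev/null; then
            return 1               # target process exited before listening
        fi
        if [[ -S "$sock" ]]; then
            return 0               # socket is up
        fi
        sleep 0.1
    done
    return 1                       # timed out
}
```

In this run the pattern gates tls.sh twice: once on the namespaced nvmf_tgt (pid 4091391, /var/tmp/spdk.sock) and once per bdevperf instance (/var/tmp/bdevperf.sock) before any bdev_nvme_attach_controller RPC is sent.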
00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.472 17:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.472 [2024-07-12 17:27:13.614216] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:18:55.472 [2024-07-12 17:27:13.614259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096332 ] 00:18:55.472 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.472 [2024-07-12 17:27:13.664162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.472 [2024-07-12 17:27:13.742097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.730 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.730 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:55.730 17:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.N0TMusoiJJ 00:18:55.989 [2024-07-12 17:27:14.583712] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.989 [2024-07-12 17:27:14.583782] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:55.989 [2024-07-12 17:27:14.588683] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:55.989 [2024-07-12 17:27:14.588706] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for 
identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:55.989 [2024-07-12 17:27:14.588731] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:55.989 [2024-07-12 17:27:14.588964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cf570 (107): Transport endpoint is not connected 00:18:55.989 [2024-07-12 17:27:14.589955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cf570 (9): Bad file descriptor 00:18:55.989 [2024-07-12 17:27:14.590956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:55.989 [2024-07-12 17:27:14.590965] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:55.989 [2024-07-12 17:27:14.590974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:55.989 request: 00:18:55.989 { 00:18:55.989 "name": "TLSTEST", 00:18:55.989 "trtype": "tcp", 00:18:55.989 "traddr": "10.0.0.2", 00:18:55.989 "adrfam": "ipv4", 00:18:55.989 "trsvcid": "4420", 00:18:55.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.989 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:55.989 "prchk_reftag": false, 00:18:55.989 "prchk_guard": false, 00:18:55.989 "hdgst": false, 00:18:55.989 "ddgst": false, 00:18:55.989 "psk": "/tmp/tmp.N0TMusoiJJ", 00:18:55.989 "method": "bdev_nvme_attach_controller", 00:18:55.989 "req_id": 1 00:18:55.989 } 00:18:55.989 Got JSON-RPC error response 00:18:55.989 response: 00:18:55.989 { 00:18:55.989 "code": -5, 00:18:55.989 "message": "Input/output error" 00:18:55.989 } 00:18:55.989 17:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4096332 00:18:55.989 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4096332 ']' 00:18:55.989 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4096332 00:18:55.989 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:55.989 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.989 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096332 00:18:55.989 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:55.989 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:55.989 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4096332' 00:18:55.989 killing process with pid 4096332 00:18:55.989 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4096332 00:18:55.989 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.989 00:18:55.989 Latency(us) 00:18:55.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.989 
=================================================================================================================== 00:18:55.989 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:55.989 [2024-07-12 17:27:14.652278] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:55.989 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4096332 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.N0TMusoiJJ 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.N0TMusoiJJ 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.N0TMusoiJJ 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.N0TMusoiJJ' 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4096570 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4096570 /var/tmp/bdevperf.sock 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4096570 ']' 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.248 17:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.248 [2024-07-12 17:27:14.873619] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:18:56.248 [2024-07-12 17:27:14.873668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096570 ] 00:18:56.248 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.248 [2024-07-12 17:27:14.923222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.248 [2024-07-12 17:27:14.990331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.184 17:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.184 17:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:57.185 17:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N0TMusoiJJ 00:18:57.185 [2024-07-12 17:27:15.821063] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:57.185 [2024-07-12 17:27:15.821140] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:57.185 [2024-07-12 17:27:15.831854] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:57.185 [2024-07-12 17:27:15.831878] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:57.185 [2024-07-12 17:27:15.831903] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:57.185 [2024-07-12 17:27:15.832353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc5570 (107): Transport endpoint is not connected 00:18:57.185 [2024-07-12 17:27:15.833346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc5570 (9): Bad file descriptor 00:18:57.185 [2024-07-12 17:27:15.834348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:57.185 [2024-07-12 17:27:15.834358] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:57.185 [2024-07-12 17:27:15.834367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:18:57.185 request: 00:18:57.185 { 00:18:57.185 "name": "TLSTEST", 00:18:57.185 "trtype": "tcp", 00:18:57.185 "traddr": "10.0.0.2", 00:18:57.185 "adrfam": "ipv4", 00:18:57.185 "trsvcid": "4420", 00:18:57.185 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:57.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:57.185 "prchk_reftag": false, 00:18:57.185 "prchk_guard": false, 00:18:57.185 "hdgst": false, 00:18:57.185 "ddgst": false, 00:18:57.185 "psk": "/tmp/tmp.N0TMusoiJJ", 00:18:57.185 "method": "bdev_nvme_attach_controller", 00:18:57.185 "req_id": 1 00:18:57.185 } 00:18:57.185 Got JSON-RPC error response 00:18:57.185 response: 00:18:57.185 { 00:18:57.185 "code": -5, 00:18:57.185 "message": "Input/output error" 00:18:57.185 } 00:18:57.185 17:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4096570 00:18:57.185 17:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4096570 ']' 00:18:57.185 17:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4096570 00:18:57.185 17:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:57.185 17:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:57.185 17:27:15 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096570 00:18:57.185 17:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:57.185 17:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:57.185 17:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4096570' 00:18:57.185 killing process with pid 4096570 00:18:57.185 17:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4096570 00:18:57.185 Received shutdown signal, test time was about 10.000000 seconds 00:18:57.185 00:18:57.185 Latency(us) 00:18:57.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.185 =================================================================================================================== 00:18:57.185 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:57.185 [2024-07-12 17:27:15.894639] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:57.185 17:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4096570 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4096805 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4096805 /var/tmp/bdevperf.sock 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4096805 ']' 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:57.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:57.443 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.443 [2024-07-12 17:27:16.114810] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:18:57.444 [2024-07-12 17:27:16.114853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096805 ] 00:18:57.444 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.444 [2024-07-12 17:27:16.164724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.701 [2024-07-12 17:27:16.243857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.268 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.268 17:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:58.268 17:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:58.526 [2024-07-12 17:27:17.075048] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:58.526 [2024-07-12 17:27:17.076898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x955af0 (9): Bad file descriptor 00:18:58.526 [2024-07-12 17:27:17.077896] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:58.526 [2024-07-12 17:27:17.077907] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:58.526 [2024-07-12 17:27:17.077916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:58.526 request: 00:18:58.526 { 00:18:58.526 "name": "TLSTEST", 00:18:58.526 "trtype": "tcp", 00:18:58.526 "traddr": "10.0.0.2", 00:18:58.526 "adrfam": "ipv4", 00:18:58.526 "trsvcid": "4420", 00:18:58.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.526 "prchk_reftag": false, 00:18:58.526 "prchk_guard": false, 00:18:58.526 "hdgst": false, 00:18:58.526 "ddgst": false, 00:18:58.526 "method": "bdev_nvme_attach_controller", 00:18:58.526 "req_id": 1 00:18:58.526 } 00:18:58.526 Got JSON-RPC error response 00:18:58.526 response: 00:18:58.526 { 00:18:58.526 "code": -5, 00:18:58.526 "message": "Input/output error" 00:18:58.526 } 00:18:58.526 17:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4096805 00:18:58.526 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4096805 ']' 00:18:58.526 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4096805 00:18:58.526 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:58.526 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:58.526 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096805 00:18:58.526 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:58.526 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:58.526 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4096805' 00:18:58.526 killing process with pid 4096805 
00:18:58.526 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4096805 00:18:58.526 Received shutdown signal, test time was about 10.000000 seconds 00:18:58.526 00:18:58.526 Latency(us) 00:18:58.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.526 =================================================================================================================== 00:18:58.526 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:58.526 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4096805 00:18:58.784 17:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:58.784 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:58.784 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:58.784 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:58.784 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:58.784 17:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 4091391 00:18:58.784 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4091391 ']' 00:18:58.784 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4091391 00:18:58.784 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:58.784 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:58.784 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4091391 00:18:58.785 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:58.785 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:58.785 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4091391' 00:18:58.785 killing process with pid 4091391 00:18:58.785 17:27:17 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@967 -- # kill 4091391 00:18:58.785 [2024-07-12 17:27:17.353743] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:58.785 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4091391 00:18:58.785 17:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:58.785 17:27:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:58.785 17:27:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:58.785 17:27:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:58.785 17:27:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:58.785 17:27:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:58.785 17:27:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.BFi7x3oWHZ 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.BFi7x3oWHZ 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4097054 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4097054 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4097054 ']' 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.042 17:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.042 [2024-07-12 17:27:17.656658] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:18:59.042 [2024-07-12 17:27:17.656700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.042 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.042 [2024-07-12 17:27:17.714041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.042 [2024-07-12 17:27:17.791366] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.043 [2024-07-12 17:27:17.791405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:59.043 [2024-07-12 17:27:17.791412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.043 [2024-07-12 17:27:17.791418] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.043 [2024-07-12 17:27:17.791423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.043 [2024-07-12 17:27:17.791449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.974 17:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.974 17:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:59.974 17:27:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:59.974 17:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:59.974 17:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.974 17:27:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.974 17:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.BFi7x3oWHZ 00:18:59.974 17:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BFi7x3oWHZ 00:18:59.974 17:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:59.974 [2024-07-12 17:27:18.649814] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.974 17:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:00.231 17:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:19:00.231 [2024-07-12 17:27:18.990685] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.231 [2024-07-12 17:27:18.990882] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.231 17:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:00.489 malloc0 00:19:00.489 17:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BFi7x3oWHZ 00:19:00.747 [2024-07-12 17:27:19.496038] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BFi7x3oWHZ 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BFi7x3oWHZ' 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4097318 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:00.747 17:27:19 
nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4097318 /var/tmp/bdevperf.sock 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4097318 ']' 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.747 17:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.748 17:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.748 17:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.006 [2024-07-12 17:27:19.554555] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:19:01.006 [2024-07-12 17:27:19.554601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4097318 ] 00:19:01.006 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.006 [2024-07-12 17:27:19.604053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.006 [2024-07-12 17:27:19.681837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.940 17:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:01.940 17:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:01.940 17:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BFi7x3oWHZ 00:19:01.940 [2024-07-12 17:27:20.511264] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.940 [2024-07-12 17:27:20.511332] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:01.940 TLSTESTn1 00:19:01.940 17:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:01.940 Running I/O for 10 seconds... 
00:19:14.139 00:19:14.139 Latency(us) 00:19:14.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.139 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.139 Verification LBA range: start 0x0 length 0x2000 00:19:14.139 TLSTESTn1 : 10.01 5483.63 21.42 0.00 0.00 23306.74 4843.97 35104.50 00:19:14.139 =================================================================================================================== 00:19:14.139 Total : 5483.63 21.42 0.00 0.00 23306.74 4843.97 35104.50 00:19:14.139 0 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4097318 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4097318 ']' 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4097318 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4097318 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4097318' 00:19:14.139 killing process with pid 4097318 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4097318 00:19:14.139 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.139 00:19:14.139 Latency(us) 00:19:14.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.139 
=================================================================================================================== 00:19:14.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.139 [2024-07-12 17:27:30.784814] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4097318 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.BFi7x3oWHZ 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BFi7x3oWHZ 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BFi7x3oWHZ 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BFi7x3oWHZ 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BFi7x3oWHZ' 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4099161 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4099161 /var/tmp/bdevperf.sock 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4099161 ']' 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.139 17:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.139 [2024-07-12 17:27:31.014405] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:19:14.139 [2024-07-12 17:27:31.014452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4099161 ] 00:19:14.139 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.139 [2024-07-12 17:27:31.065713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.139 [2024-07-12 17:27:31.144659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.139 17:27:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.139 17:27:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:14.139 17:27:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BFi7x3oWHZ 00:19:14.139 [2024-07-12 17:27:31.979469] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.139 [2024-07-12 17:27:31.979509] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:14.139 [2024-07-12 17:27:31.979515] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.BFi7x3oWHZ 00:19:14.139 request: 00:19:14.139 { 00:19:14.139 "name": "TLSTEST", 00:19:14.139 "trtype": "tcp", 00:19:14.139 "traddr": "10.0.0.2", 00:19:14.139 "adrfam": "ipv4", 00:19:14.139 "trsvcid": "4420", 00:19:14.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.139 "prchk_reftag": false, 00:19:14.139 "prchk_guard": false, 00:19:14.139 "hdgst": false, 00:19:14.139 "ddgst": false, 00:19:14.139 "psk": "/tmp/tmp.BFi7x3oWHZ", 00:19:14.139 "method": "bdev_nvme_attach_controller", 
00:19:14.139 "req_id": 1 00:19:14.139 } 00:19:14.139 Got JSON-RPC error response 00:19:14.139 response: 00:19:14.139 { 00:19:14.139 "code": -1, 00:19:14.139 "message": "Operation not permitted" 00:19:14.139 } 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4099161 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4099161 ']' 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4099161 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4099161 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4099161' 00:19:14.139 killing process with pid 4099161 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4099161 00:19:14.139 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.139 00:19:14.139 Latency(us) 00:19:14.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.139 =================================================================================================================== 00:19:14.139 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4099161 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:14.139 
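The "Operation not permitted" failure above is the expected negative case: after `chmod 0666` on the key file, SPDK refuses to load the TLS PSK because its mode allows group and world access. The permission rule can be sketched standalone in shell (an illustration of the check's logic, assuming an owner-only 0600 requirement; the temp file here is hypothetical, not the test's actual PSK):

```shell
# Mirror the "Incorrect permissions for PSK file" rejection seen in the log:
# a PSK file is only acceptable when readable by its owner alone (mode 0600).
key=$(mktemp)
chmod 0666 "$key"
mode=$(stat -c '%a' "$key")          # GNU stat: print octal permission bits
if [ "$mode" != "600" ]; then
    echo "Incorrect permissions for PSK file (mode $mode)"
fi
chmod 0600 "$key"                    # tighten to owner-only, as tls.sh@181 does
[ "$(stat -c '%a' "$key")" = "600" ] && echo "PSK file accepted"
rm -f "$key"
```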
17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 4097054 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4097054 ']' 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4097054 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4097054 00:19:14.139 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4097054' 00:19:14.140 killing process with pid 4097054 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4097054 00:19:14.140 [2024-07-12 17:27:32.270962] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4097054 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4099417 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4099417 
00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4099417 ']' 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.140 17:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.140 [2024-07-12 17:27:32.509503] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:19:14.140 [2024-07-12 17:27:32.509550] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.140 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.140 [2024-07-12 17:27:32.567286] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.140 [2024-07-12 17:27:32.645335] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.140 [2024-07-12 17:27:32.645368] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:14.140 [2024-07-12 17:27:32.645375] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.140 [2024-07-12 17:27:32.645386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.140 [2024-07-12 17:27:32.645407] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.140 [2024-07-12 17:27:32.645424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.BFi7x3oWHZ 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.BFi7x3oWHZ 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.BFi7x3oWHZ 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.BFi7x3oWHZ 00:19:14.706 17:27:33 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:14.964 [2024-07-12 17:27:33.499761] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.964 17:27:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:14.964 17:27:33 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:15.222 [2024-07-12 17:27:33.836617] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:15.222 [2024-07-12 17:27:33.836794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.222 17:27:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:15.480 malloc0 00:19:15.480 17:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:15.480 17:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BFi7x3oWHZ 00:19:15.738 [2024-07-12 17:27:34.354087] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:15.738 [2024-07-12 17:27:34.354113] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:15.738 [2024-07-12 17:27:34.354136] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:15.738 
request: 00:19:15.738 { 00:19:15.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.738 "host": "nqn.2016-06.io.spdk:host1", 00:19:15.738 "psk": "/tmp/tmp.BFi7x3oWHZ", 00:19:15.738 "method": "nvmf_subsystem_add_host", 00:19:15.738 "req_id": 1 00:19:15.738 } 00:19:15.738 Got JSON-RPC error response 00:19:15.738 response: 00:19:15.738 { 00:19:15.738 "code": -32603, 00:19:15.738 "message": "Internal error" 00:19:15.738 } 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 4099417 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4099417 ']' 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4099417 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4099417 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4099417' 00:19:15.738 killing process with pid 4099417 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4099417 00:19:15.738 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4099417 00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.BFi7x3oWHZ 
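`killprocess`, invoked repeatedly above, follows a common shell teardown pattern: probe liveness with `kill -0` (which delivers no signal, only an existence check), inspect the process name so a privileged helper is not killed by mistake, then signal and reap. A minimal standalone sketch of the liveness-then-kill portion (illustrative names, not the autotest_common.sh helper itself):

```shell
# Probe a PID with kill -0, then terminate and reap it, echoing a
# "killing process with pid ..." style message like the log above.
killprocess_sketch() {
    pid=$1
    if kill -0 "$pid" 2>/dev/null; then   # signal 0: existence test only
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true   # reap; wait fails for killed jobs
    else
        echo "process $pid not running"
    fi
}

sleep 60 &
killprocess_sketch $!
```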
00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4099886 00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4099886 00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4099886 ']' 00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:15.995 17:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.995 [2024-07-12 17:27:34.654758] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:19:15.995 [2024-07-12 17:27:34.654805] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.995 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.995 [2024-07-12 17:27:34.712205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.252 [2024-07-12 17:27:34.791988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.252 [2024-07-12 17:27:34.792022] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.252 [2024-07-12 17:27:34.792029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.252 [2024-07-12 17:27:34.792035] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.252 [2024-07-12 17:27:34.792040] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:16.252 [2024-07-12 17:27:34.792064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.816 17:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:16.816 17:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:16.816 17:27:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:16.816 17:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:16.816 17:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.816 17:27:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.816 17:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.BFi7x3oWHZ 00:19:16.816 17:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BFi7x3oWHZ 00:19:16.816 17:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:17.074 [2024-07-12 17:27:35.646955] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.074 17:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:17.074 17:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:17.331 [2024-07-12 17:27:35.975804] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:17.331 [2024-07-12 17:27:35.975998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.331 17:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:19:17.588 malloc0 00:19:17.588 17:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:17.588 17:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BFi7x3oWHZ 00:19:17.846 [2024-07-12 17:27:36.477122] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:17.846 17:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=4100148 00:19:17.846 17:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.846 17:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.846 17:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 4100148 /var/tmp/bdevperf.sock 00:19:17.846 17:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4100148 ']' 00:19:17.846 17:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.846 17:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.846 17:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:17.846 17:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.846 17:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.846 [2024-07-12 17:27:36.536703] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:19:17.846 [2024-07-12 17:27:36.536746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4100148 ] 00:19:17.846 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.846 [2024-07-12 17:27:36.586161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.162 [2024-07-12 17:27:36.660234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.727 17:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.727 17:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:18.727 17:27:37 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BFi7x3oWHZ 00:19:18.727 [2024-07-12 17:27:37.486927] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.727 [2024-07-12 17:27:37.486991] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:18.985 TLSTESTn1 00:19:18.985 17:27:37 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:19.243 17:27:37 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:19.243 "subsystems": [ 00:19:19.243 { 00:19:19.243 
"subsystem": "keyring", 00:19:19.243 "config": [] 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "subsystem": "iobuf", 00:19:19.243 "config": [ 00:19:19.243 { 00:19:19.243 "method": "iobuf_set_options", 00:19:19.243 "params": { 00:19:19.243 "small_pool_count": 8192, 00:19:19.243 "large_pool_count": 1024, 00:19:19.243 "small_bufsize": 8192, 00:19:19.243 "large_bufsize": 135168 00:19:19.243 } 00:19:19.243 } 00:19:19.243 ] 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "subsystem": "sock", 00:19:19.243 "config": [ 00:19:19.243 { 00:19:19.243 "method": "sock_set_default_impl", 00:19:19.243 "params": { 00:19:19.243 "impl_name": "posix" 00:19:19.243 } 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "method": "sock_impl_set_options", 00:19:19.243 "params": { 00:19:19.243 "impl_name": "ssl", 00:19:19.243 "recv_buf_size": 4096, 00:19:19.243 "send_buf_size": 4096, 00:19:19.243 "enable_recv_pipe": true, 00:19:19.243 "enable_quickack": false, 00:19:19.243 "enable_placement_id": 0, 00:19:19.243 "enable_zerocopy_send_server": true, 00:19:19.243 "enable_zerocopy_send_client": false, 00:19:19.243 "zerocopy_threshold": 0, 00:19:19.243 "tls_version": 0, 00:19:19.243 "enable_ktls": false 00:19:19.243 } 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "method": "sock_impl_set_options", 00:19:19.243 "params": { 00:19:19.243 "impl_name": "posix", 00:19:19.243 "recv_buf_size": 2097152, 00:19:19.243 "send_buf_size": 2097152, 00:19:19.243 "enable_recv_pipe": true, 00:19:19.243 "enable_quickack": false, 00:19:19.243 "enable_placement_id": 0, 00:19:19.243 "enable_zerocopy_send_server": true, 00:19:19.243 "enable_zerocopy_send_client": false, 00:19:19.243 "zerocopy_threshold": 0, 00:19:19.243 "tls_version": 0, 00:19:19.243 "enable_ktls": false 00:19:19.243 } 00:19:19.243 } 00:19:19.243 ] 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "subsystem": "vmd", 00:19:19.243 "config": [] 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "subsystem": "accel", 00:19:19.243 "config": [ 00:19:19.243 { 00:19:19.243 "method": 
"accel_set_options", 00:19:19.243 "params": { 00:19:19.243 "small_cache_size": 128, 00:19:19.243 "large_cache_size": 16, 00:19:19.243 "task_count": 2048, 00:19:19.243 "sequence_count": 2048, 00:19:19.243 "buf_count": 2048 00:19:19.243 } 00:19:19.243 } 00:19:19.243 ] 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "subsystem": "bdev", 00:19:19.243 "config": [ 00:19:19.243 { 00:19:19.243 "method": "bdev_set_options", 00:19:19.243 "params": { 00:19:19.243 "bdev_io_pool_size": 65535, 00:19:19.243 "bdev_io_cache_size": 256, 00:19:19.243 "bdev_auto_examine": true, 00:19:19.243 "iobuf_small_cache_size": 128, 00:19:19.243 "iobuf_large_cache_size": 16 00:19:19.243 } 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "method": "bdev_raid_set_options", 00:19:19.243 "params": { 00:19:19.243 "process_window_size_kb": 1024 00:19:19.243 } 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "method": "bdev_iscsi_set_options", 00:19:19.243 "params": { 00:19:19.243 "timeout_sec": 30 00:19:19.243 } 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "method": "bdev_nvme_set_options", 00:19:19.243 "params": { 00:19:19.243 "action_on_timeout": "none", 00:19:19.243 "timeout_us": 0, 00:19:19.243 "timeout_admin_us": 0, 00:19:19.243 "keep_alive_timeout_ms": 10000, 00:19:19.243 "arbitration_burst": 0, 00:19:19.243 "low_priority_weight": 0, 00:19:19.243 "medium_priority_weight": 0, 00:19:19.243 "high_priority_weight": 0, 00:19:19.243 "nvme_adminq_poll_period_us": 10000, 00:19:19.243 "nvme_ioq_poll_period_us": 0, 00:19:19.243 "io_queue_requests": 0, 00:19:19.243 "delay_cmd_submit": true, 00:19:19.243 "transport_retry_count": 4, 00:19:19.243 "bdev_retry_count": 3, 00:19:19.243 "transport_ack_timeout": 0, 00:19:19.243 "ctrlr_loss_timeout_sec": 0, 00:19:19.243 "reconnect_delay_sec": 0, 00:19:19.243 "fast_io_fail_timeout_sec": 0, 00:19:19.243 "disable_auto_failback": false, 00:19:19.243 "generate_uuids": false, 00:19:19.243 "transport_tos": 0, 00:19:19.243 "nvme_error_stat": false, 00:19:19.243 "rdma_srq_size": 0, 
00:19:19.243 "io_path_stat": false, 00:19:19.243 "allow_accel_sequence": false, 00:19:19.243 "rdma_max_cq_size": 0, 00:19:19.243 "rdma_cm_event_timeout_ms": 0, 00:19:19.243 "dhchap_digests": [ 00:19:19.243 "sha256", 00:19:19.243 "sha384", 00:19:19.243 "sha512" 00:19:19.243 ], 00:19:19.243 "dhchap_dhgroups": [ 00:19:19.243 "null", 00:19:19.243 "ffdhe2048", 00:19:19.243 "ffdhe3072", 00:19:19.243 "ffdhe4096", 00:19:19.243 "ffdhe6144", 00:19:19.243 "ffdhe8192" 00:19:19.243 ] 00:19:19.243 } 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "method": "bdev_nvme_set_hotplug", 00:19:19.243 "params": { 00:19:19.243 "period_us": 100000, 00:19:19.243 "enable": false 00:19:19.243 } 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "method": "bdev_malloc_create", 00:19:19.243 "params": { 00:19:19.243 "name": "malloc0", 00:19:19.243 "num_blocks": 8192, 00:19:19.243 "block_size": 4096, 00:19:19.243 "physical_block_size": 4096, 00:19:19.243 "uuid": "918927e2-048e-4c89-a42d-c7f747ca5ece", 00:19:19.243 "optimal_io_boundary": 0 00:19:19.243 } 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "method": "bdev_wait_for_examine" 00:19:19.243 } 00:19:19.243 ] 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "subsystem": "nbd", 00:19:19.243 "config": [] 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "subsystem": "scheduler", 00:19:19.243 "config": [ 00:19:19.243 { 00:19:19.243 "method": "framework_set_scheduler", 00:19:19.243 "params": { 00:19:19.243 "name": "static" 00:19:19.243 } 00:19:19.243 } 00:19:19.243 ] 00:19:19.243 }, 00:19:19.243 { 00:19:19.243 "subsystem": "nvmf", 00:19:19.243 "config": [ 00:19:19.243 { 00:19:19.243 "method": "nvmf_set_config", 00:19:19.243 "params": { 00:19:19.243 "discovery_filter": "match_any", 00:19:19.243 "admin_cmd_passthru": { 00:19:19.243 "identify_ctrlr": false 00:19:19.243 } 00:19:19.244 } 00:19:19.244 }, 00:19:19.244 { 00:19:19.244 "method": "nvmf_set_max_subsystems", 00:19:19.244 "params": { 00:19:19.244 "max_subsystems": 1024 00:19:19.244 } 00:19:19.244 }, 00:19:19.244 { 
00:19:19.244 "method": "nvmf_set_crdt", 00:19:19.244 "params": { 00:19:19.244 "crdt1": 0, 00:19:19.244 "crdt2": 0, 00:19:19.244 "crdt3": 0 00:19:19.244 } 00:19:19.244 }, 00:19:19.244 { 00:19:19.244 "method": "nvmf_create_transport", 00:19:19.244 "params": { 00:19:19.244 "trtype": "TCP", 00:19:19.244 "max_queue_depth": 128, 00:19:19.244 "max_io_qpairs_per_ctrlr": 127, 00:19:19.244 "in_capsule_data_size": 4096, 00:19:19.244 "max_io_size": 131072, 00:19:19.244 "io_unit_size": 131072, 00:19:19.244 "max_aq_depth": 128, 00:19:19.244 "num_shared_buffers": 511, 00:19:19.244 "buf_cache_size": 4294967295, 00:19:19.244 "dif_insert_or_strip": false, 00:19:19.244 "zcopy": false, 00:19:19.244 "c2h_success": false, 00:19:19.244 "sock_priority": 0, 00:19:19.244 "abort_timeout_sec": 1, 00:19:19.244 "ack_timeout": 0, 00:19:19.244 "data_wr_pool_size": 0 00:19:19.244 } 00:19:19.244 }, 00:19:19.244 { 00:19:19.244 "method": "nvmf_create_subsystem", 00:19:19.244 "params": { 00:19:19.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.244 "allow_any_host": false, 00:19:19.244 "serial_number": "SPDK00000000000001", 00:19:19.244 "model_number": "SPDK bdev Controller", 00:19:19.244 "max_namespaces": 10, 00:19:19.244 "min_cntlid": 1, 00:19:19.244 "max_cntlid": 65519, 00:19:19.244 "ana_reporting": false 00:19:19.244 } 00:19:19.244 }, 00:19:19.244 { 00:19:19.244 "method": "nvmf_subsystem_add_host", 00:19:19.244 "params": { 00:19:19.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.244 "host": "nqn.2016-06.io.spdk:host1", 00:19:19.244 "psk": "/tmp/tmp.BFi7x3oWHZ" 00:19:19.244 } 00:19:19.244 }, 00:19:19.244 { 00:19:19.244 "method": "nvmf_subsystem_add_ns", 00:19:19.244 "params": { 00:19:19.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.244 "namespace": { 00:19:19.244 "nsid": 1, 00:19:19.244 "bdev_name": "malloc0", 00:19:19.244 "nguid": "918927E2048E4C89A42DC7F747CA5ECE", 00:19:19.244 "uuid": "918927e2-048e-4c89-a42d-c7f747ca5ece", 00:19:19.244 "no_auto_visible": false 00:19:19.244 } 00:19:19.244 
} 00:19:19.244 }, 00:19:19.244 { 00:19:19.244 "method": "nvmf_subsystem_add_listener", 00:19:19.244 "params": { 00:19:19.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.244 "listen_address": { 00:19:19.244 "trtype": "TCP", 00:19:19.244 "adrfam": "IPv4", 00:19:19.244 "traddr": "10.0.0.2", 00:19:19.244 "trsvcid": "4420" 00:19:19.244 }, 00:19:19.244 "secure_channel": true 00:19:19.244 } 00:19:19.244 } 00:19:19.244 ] 00:19:19.244 } 00:19:19.244 ] 00:19:19.244 }' 00:19:19.244 17:27:37 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:19.502 17:27:38 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:19.502 "subsystems": [ 00:19:19.502 { 00:19:19.502 "subsystem": "keyring", 00:19:19.502 "config": [] 00:19:19.502 }, 00:19:19.502 { 00:19:19.502 "subsystem": "iobuf", 00:19:19.502 "config": [ 00:19:19.502 { 00:19:19.502 "method": "iobuf_set_options", 00:19:19.502 "params": { 00:19:19.502 "small_pool_count": 8192, 00:19:19.502 "large_pool_count": 1024, 00:19:19.502 "small_bufsize": 8192, 00:19:19.502 "large_bufsize": 135168 00:19:19.502 } 00:19:19.502 } 00:19:19.502 ] 00:19:19.502 }, 00:19:19.502 { 00:19:19.502 "subsystem": "sock", 00:19:19.502 "config": [ 00:19:19.502 { 00:19:19.502 "method": "sock_set_default_impl", 00:19:19.502 "params": { 00:19:19.502 "impl_name": "posix" 00:19:19.502 } 00:19:19.502 }, 00:19:19.502 { 00:19:19.502 "method": "sock_impl_set_options", 00:19:19.502 "params": { 00:19:19.502 "impl_name": "ssl", 00:19:19.502 "recv_buf_size": 4096, 00:19:19.502 "send_buf_size": 4096, 00:19:19.502 "enable_recv_pipe": true, 00:19:19.502 "enable_quickack": false, 00:19:19.502 "enable_placement_id": 0, 00:19:19.502 "enable_zerocopy_send_server": true, 00:19:19.502 "enable_zerocopy_send_client": false, 00:19:19.502 "zerocopy_threshold": 0, 00:19:19.502 "tls_version": 0, 00:19:19.502 "enable_ktls": false 00:19:19.502 } 00:19:19.502 }, 00:19:19.502 { 
00:19:19.502 "method": "sock_impl_set_options", 00:19:19.502 "params": { 00:19:19.502 "impl_name": "posix", 00:19:19.502 "recv_buf_size": 2097152, 00:19:19.502 "send_buf_size": 2097152, 00:19:19.502 "enable_recv_pipe": true, 00:19:19.502 "enable_quickack": false, 00:19:19.502 "enable_placement_id": 0, 00:19:19.502 "enable_zerocopy_send_server": true, 00:19:19.502 "enable_zerocopy_send_client": false, 00:19:19.502 "zerocopy_threshold": 0, 00:19:19.502 "tls_version": 0, 00:19:19.502 "enable_ktls": false 00:19:19.502 } 00:19:19.502 } 00:19:19.502 ] 00:19:19.502 }, 00:19:19.502 { 00:19:19.502 "subsystem": "vmd", 00:19:19.502 "config": [] 00:19:19.502 }, 00:19:19.502 { 00:19:19.502 "subsystem": "accel", 00:19:19.502 "config": [ 00:19:19.502 { 00:19:19.502 "method": "accel_set_options", 00:19:19.502 "params": { 00:19:19.502 "small_cache_size": 128, 00:19:19.502 "large_cache_size": 16, 00:19:19.502 "task_count": 2048, 00:19:19.502 "sequence_count": 2048, 00:19:19.502 "buf_count": 2048 00:19:19.502 } 00:19:19.502 } 00:19:19.502 ] 00:19:19.502 }, 00:19:19.502 { 00:19:19.502 "subsystem": "bdev", 00:19:19.502 "config": [ 00:19:19.502 { 00:19:19.502 "method": "bdev_set_options", 00:19:19.502 "params": { 00:19:19.502 "bdev_io_pool_size": 65535, 00:19:19.502 "bdev_io_cache_size": 256, 00:19:19.502 "bdev_auto_examine": true, 00:19:19.502 "iobuf_small_cache_size": 128, 00:19:19.502 "iobuf_large_cache_size": 16 00:19:19.502 } 00:19:19.502 }, 00:19:19.502 { 00:19:19.502 "method": "bdev_raid_set_options", 00:19:19.502 "params": { 00:19:19.502 "process_window_size_kb": 1024 00:19:19.502 } 00:19:19.502 }, 00:19:19.502 { 00:19:19.502 "method": "bdev_iscsi_set_options", 00:19:19.502 "params": { 00:19:19.502 "timeout_sec": 30 00:19:19.502 } 00:19:19.502 }, 00:19:19.502 { 00:19:19.502 "method": "bdev_nvme_set_options", 00:19:19.502 "params": { 00:19:19.502 "action_on_timeout": "none", 00:19:19.502 "timeout_us": 0, 00:19:19.502 "timeout_admin_us": 0, 00:19:19.502 "keep_alive_timeout_ms": 
10000, 00:19:19.502 "arbitration_burst": 0, 00:19:19.502 "low_priority_weight": 0, 00:19:19.502 "medium_priority_weight": 0, 00:19:19.502 "high_priority_weight": 0, 00:19:19.502 "nvme_adminq_poll_period_us": 10000, 00:19:19.502 "nvme_ioq_poll_period_us": 0, 00:19:19.502 "io_queue_requests": 512, 00:19:19.502 "delay_cmd_submit": true, 00:19:19.502 "transport_retry_count": 4, 00:19:19.502 "bdev_retry_count": 3, 00:19:19.502 "transport_ack_timeout": 0, 00:19:19.502 "ctrlr_loss_timeout_sec": 0, 00:19:19.502 "reconnect_delay_sec": 0, 00:19:19.502 "fast_io_fail_timeout_sec": 0, 00:19:19.502 "disable_auto_failback": false, 00:19:19.502 "generate_uuids": false, 00:19:19.502 "transport_tos": 0, 00:19:19.502 "nvme_error_stat": false, 00:19:19.502 "rdma_srq_size": 0, 00:19:19.502 "io_path_stat": false, 00:19:19.502 "allow_accel_sequence": false, 00:19:19.502 "rdma_max_cq_size": 0, 00:19:19.502 "rdma_cm_event_timeout_ms": 0, 00:19:19.502 "dhchap_digests": [ 00:19:19.502 "sha256", 00:19:19.502 "sha384", 00:19:19.502 "sha512" 00:19:19.502 ], 00:19:19.502 "dhchap_dhgroups": [ 00:19:19.502 "null", 00:19:19.502 "ffdhe2048", 00:19:19.502 "ffdhe3072", 00:19:19.502 "ffdhe4096", 00:19:19.502 "ffdhe6144", 00:19:19.502 "ffdhe8192" 00:19:19.502 ] 00:19:19.502 } 00:19:19.502 }, 00:19:19.502 { 00:19:19.502 "method": "bdev_nvme_attach_controller", 00:19:19.502 "params": { 00:19:19.502 "name": "TLSTEST", 00:19:19.502 "trtype": "TCP", 00:19:19.502 "adrfam": "IPv4", 00:19:19.502 "traddr": "10.0.0.2", 00:19:19.502 "trsvcid": "4420", 00:19:19.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.502 "prchk_reftag": false, 00:19:19.502 "prchk_guard": false, 00:19:19.502 "ctrlr_loss_timeout_sec": 0, 00:19:19.502 "reconnect_delay_sec": 0, 00:19:19.502 "fast_io_fail_timeout_sec": 0, 00:19:19.502 "psk": "/tmp/tmp.BFi7x3oWHZ", 00:19:19.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:19.502 "hdgst": false, 00:19:19.502 "ddgst": false 00:19:19.502 } 00:19:19.502 }, 00:19:19.502 { 00:19:19.502 "method": 
"bdev_nvme_set_hotplug", 00:19:19.502 "params": { 00:19:19.502 "period_us": 100000, 00:19:19.502 "enable": false 00:19:19.502 } 00:19:19.502 }, 00:19:19.502 { 00:19:19.503 "method": "bdev_wait_for_examine" 00:19:19.503 } 00:19:19.503 ] 00:19:19.503 }, 00:19:19.503 { 00:19:19.503 "subsystem": "nbd", 00:19:19.503 "config": [] 00:19:19.503 } 00:19:19.503 ] 00:19:19.503 }' 00:19:19.503 17:27:38 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 4100148 00:19:19.503 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4100148 ']' 00:19:19.503 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4100148 00:19:19.503 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:19.503 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.503 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4100148 00:19:19.503 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:19.503 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:19.503 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4100148' 00:19:19.503 killing process with pid 4100148 00:19:19.503 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4100148 00:19:19.503 Received shutdown signal, test time was about 10.000000 seconds 00:19:19.503 00:19:19.503 Latency(us) 00:19:19.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.503 =================================================================================================================== 00:19:19.503 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:19.503 [2024-07-12 17:27:38.125530] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 
00:19:19.503 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4100148 00:19:19.761 17:27:38 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 4099886 00:19:19.761 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4099886 ']' 00:19:19.761 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4099886 00:19:19.761 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:19.761 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.761 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4099886 00:19:19.761 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:19.761 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:19.761 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4099886' 00:19:19.761 killing process with pid 4099886 00:19:19.761 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4099886 00:19:19.761 [2024-07-12 17:27:38.350559] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:19.761 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4099886 00:19:20.019 17:27:38 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:20.019 17:27:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.019 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:20.019 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.019 17:27:38 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:20.019 "subsystems": [ 00:19:20.019 { 00:19:20.019 "subsystem": "keyring", 00:19:20.019 "config": [] 00:19:20.019 }, 00:19:20.019 { 00:19:20.019 "subsystem": 
"iobuf", 00:19:20.019 "config": [ 00:19:20.019 { 00:19:20.019 "method": "iobuf_set_options", 00:19:20.019 "params": { 00:19:20.019 "small_pool_count": 8192, 00:19:20.019 "large_pool_count": 1024, 00:19:20.019 "small_bufsize": 8192, 00:19:20.019 "large_bufsize": 135168 00:19:20.019 } 00:19:20.019 } 00:19:20.019 ] 00:19:20.019 }, 00:19:20.019 { 00:19:20.019 "subsystem": "sock", 00:19:20.019 "config": [ 00:19:20.019 { 00:19:20.019 "method": "sock_set_default_impl", 00:19:20.019 "params": { 00:19:20.019 "impl_name": "posix" 00:19:20.019 } 00:19:20.019 }, 00:19:20.019 { 00:19:20.019 "method": "sock_impl_set_options", 00:19:20.019 "params": { 00:19:20.019 "impl_name": "ssl", 00:19:20.019 "recv_buf_size": 4096, 00:19:20.019 "send_buf_size": 4096, 00:19:20.019 "enable_recv_pipe": true, 00:19:20.019 "enable_quickack": false, 00:19:20.019 "enable_placement_id": 0, 00:19:20.019 "enable_zerocopy_send_server": true, 00:19:20.019 "enable_zerocopy_send_client": false, 00:19:20.019 "zerocopy_threshold": 0, 00:19:20.019 "tls_version": 0, 00:19:20.019 "enable_ktls": false 00:19:20.019 } 00:19:20.019 }, 00:19:20.019 { 00:19:20.019 "method": "sock_impl_set_options", 00:19:20.019 "params": { 00:19:20.019 "impl_name": "posix", 00:19:20.019 "recv_buf_size": 2097152, 00:19:20.019 "send_buf_size": 2097152, 00:19:20.019 "enable_recv_pipe": true, 00:19:20.019 "enable_quickack": false, 00:19:20.019 "enable_placement_id": 0, 00:19:20.019 "enable_zerocopy_send_server": true, 00:19:20.019 "enable_zerocopy_send_client": false, 00:19:20.019 "zerocopy_threshold": 0, 00:19:20.019 "tls_version": 0, 00:19:20.019 "enable_ktls": false 00:19:20.019 } 00:19:20.019 } 00:19:20.019 ] 00:19:20.019 }, 00:19:20.019 { 00:19:20.019 "subsystem": "vmd", 00:19:20.019 "config": [] 00:19:20.019 }, 00:19:20.019 { 00:19:20.019 "subsystem": "accel", 00:19:20.019 "config": [ 00:19:20.019 { 00:19:20.019 "method": "accel_set_options", 00:19:20.019 "params": { 00:19:20.019 "small_cache_size": 128, 00:19:20.019 
"large_cache_size": 16, 00:19:20.019 "task_count": 2048, 00:19:20.019 "sequence_count": 2048, 00:19:20.019 "buf_count": 2048 00:19:20.019 } 00:19:20.019 } 00:19:20.019 ] 00:19:20.019 }, 00:19:20.019 { 00:19:20.019 "subsystem": "bdev", 00:19:20.019 "config": [ 00:19:20.019 { 00:19:20.019 "method": "bdev_set_options", 00:19:20.019 "params": { 00:19:20.019 "bdev_io_pool_size": 65535, 00:19:20.019 "bdev_io_cache_size": 256, 00:19:20.019 "bdev_auto_examine": true, 00:19:20.019 "iobuf_small_cache_size": 128, 00:19:20.019 "iobuf_large_cache_size": 16 00:19:20.019 } 00:19:20.019 }, 00:19:20.019 { 00:19:20.019 "method": "bdev_raid_set_options", 00:19:20.019 "params": { 00:19:20.019 "process_window_size_kb": 1024 00:19:20.019 } 00:19:20.019 }, 00:19:20.019 { 00:19:20.019 "method": "bdev_iscsi_set_options", 00:19:20.019 "params": { 00:19:20.019 "timeout_sec": 30 00:19:20.019 } 00:19:20.019 }, 00:19:20.019 { 00:19:20.019 "method": "bdev_nvme_set_options", 00:19:20.019 "params": { 00:19:20.019 "action_on_timeout": "none", 00:19:20.019 "timeout_us": 0, 00:19:20.019 "timeout_admin_us": 0, 00:19:20.019 "keep_alive_timeout_ms": 10000, 00:19:20.019 "arbitration_burst": 0, 00:19:20.019 "low_priority_weight": 0, 00:19:20.019 "medium_priority_weight": 0, 00:19:20.019 "high_priority_weight": 0, 00:19:20.019 "nvme_adminq_poll_period_us": 10000, 00:19:20.019 "nvme_ioq_poll_period_us": 0, 00:19:20.019 "io_queue_requests": 0, 00:19:20.019 "delay_cmd_submit": true, 00:19:20.019 "transport_retry_count": 4, 00:19:20.019 "bdev_retry_count": 3, 00:19:20.019 "transport_ack_timeout": 0, 00:19:20.019 "ctrlr_loss_timeout_sec": 0, 00:19:20.019 "reconnect_delay_sec": 0, 00:19:20.019 "fast_io_fail_timeout_sec": 0, 00:19:20.019 "disable_auto_failback": false, 00:19:20.019 "generate_uuids": false, 00:19:20.019 "transport_tos": 0, 00:19:20.019 "nvme_error_stat": false, 00:19:20.019 "rdma_srq_size": 0, 00:19:20.019 "io_path_stat": false, 00:19:20.019 "allow_accel_sequence": false, 00:19:20.019 
"rdma_max_cq_size": 0, 00:19:20.019 "rdma_cm_event_timeout_ms": 0, 00:19:20.019 "dhchap_digests": [ 00:19:20.019 "sha256", 00:19:20.019 "sha384", 00:19:20.019 "sha512" 00:19:20.019 ], 00:19:20.019 "dhchap_dhgroups": [ 00:19:20.019 "null", 00:19:20.019 "ffdhe2048", 00:19:20.019 "ffdhe3072", 00:19:20.019 "ffdhe4096", 00:19:20.019 "ffdhe6144", 00:19:20.020 "ffdhe8192" 00:19:20.020 ] 00:19:20.020 } 00:19:20.020 }, 00:19:20.020 { 00:19:20.020 "method": "bdev_nvme_set_hotplug", 00:19:20.020 "params": { 00:19:20.020 "period_us": 100000, 00:19:20.020 "enable": false 00:19:20.020 } 00:19:20.020 }, 00:19:20.020 { 00:19:20.020 "method": "bdev_malloc_create", 00:19:20.020 "params": { 00:19:20.020 "name": "malloc0", 00:19:20.020 "num_blocks": 8192, 00:19:20.020 "block_size": 4096, 00:19:20.020 "physical_block_size": 4096, 00:19:20.020 "uuid": "918927e2-048e-4c89-a42d-c7f747ca5ece", 00:19:20.020 "optimal_io_boundary": 0 00:19:20.020 } 00:19:20.020 }, 00:19:20.020 { 00:19:20.020 "method": "bdev_wait_for_examine" 00:19:20.020 } 00:19:20.020 ] 00:19:20.020 }, 00:19:20.020 { 00:19:20.020 "subsystem": "nbd", 00:19:20.020 "config": [] 00:19:20.020 }, 00:19:20.020 { 00:19:20.020 "subsystem": "scheduler", 00:19:20.020 "config": [ 00:19:20.020 { 00:19:20.020 "method": "framework_set_scheduler", 00:19:20.020 "params": { 00:19:20.020 "name": "static" 00:19:20.020 } 00:19:20.020 } 00:19:20.020 ] 00:19:20.020 }, 00:19:20.020 { 00:19:20.020 "subsystem": "nvmf", 00:19:20.020 "config": [ 00:19:20.020 { 00:19:20.020 "method": "nvmf_set_config", 00:19:20.020 "params": { 00:19:20.020 "discovery_filter": "match_any", 00:19:20.020 "admin_cmd_passthru": { 00:19:20.020 "identify_ctrlr": false 00:19:20.020 } 00:19:20.020 } 00:19:20.020 }, 00:19:20.020 { 00:19:20.020 "method": "nvmf_set_max_subsystems", 00:19:20.020 "params": { 00:19:20.020 "max_subsystems": 1024 00:19:20.020 } 00:19:20.020 }, 00:19:20.020 { 00:19:20.020 "method": "nvmf_set_crdt", 00:19:20.020 "params": { 00:19:20.020 "crdt1": 0, 
00:19:20.020 "crdt2": 0, 00:19:20.020 "crdt3": 0 00:19:20.020 } 00:19:20.020 }, 00:19:20.020 { 00:19:20.020 "method": "nvmf_create_transport", 00:19:20.020 "params": { 00:19:20.020 "trtype": "TCP", 00:19:20.020 "max_queue_depth": 128, 00:19:20.020 "max_io_qpairs_per_ctrlr": 127, 00:19:20.020 "in_capsule_data_size": 4096, 00:19:20.020 "max_io_size": 131072, 00:19:20.020 "io_unit_size": 131072, 00:19:20.020 "max_aq_depth": 128, 00:19:20.020 "num_shared_buffers": 511, 00:19:20.020 "buf_cache_size": 4294967295, 00:19:20.020 "dif_insert_or_strip": false, 00:19:20.020 "zcopy": false, 00:19:20.020 "c2h_success": false, 00:19:20.020 "sock_priority": 0, 00:19:20.020 "abort_timeout_sec": 1, 00:19:20.020 "ack_timeout": 0, 00:19:20.020 "data_wr_pool_size": 0 00:19:20.020 } 00:19:20.020 }, 00:19:20.020 { 00:19:20.020 "method": "nvmf_create_subsystem", 00:19:20.020 "params": { 00:19:20.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.020 "allow_any_host": false, 00:19:20.020 "serial_number": "SPDK00000000000001", 00:19:20.020 "model_number": "SPDK bdev Controller", 00:19:20.020 "max_namespaces": 10, 00:19:20.020 "min_cntlid": 1, 00:19:20.020 "max_cntlid": 65519, 00:19:20.020 "ana_reporting": false 00:19:20.020 } 00:19:20.020 }, 00:19:20.020 { 00:19:20.020 "method": "nvmf_subsystem_add_host", 00:19:20.020 "params": { 00:19:20.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.020 "host": "nqn.2016-06.io.spdk:host1", 00:19:20.020 "psk": "/tmp/tmp.BFi7x3oWHZ" 00:19:20.020 } 00:19:20.020 }, 00:19:20.020 { 00:19:20.020 "method": "nvmf_subsystem_add_ns", 00:19:20.020 "params": { 00:19:20.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.020 "namespace": { 00:19:20.020 "nsid": 1, 00:19:20.020 "bdev_name": "malloc0", 00:19:20.020 "nguid": "918927E2048E4C89A42DC7F747CA5ECE", 00:19:20.020 "uuid": "918927e2-048e-4c89-a42d-c7f747ca5ece", 00:19:20.020 "no_auto_visible": false 00:19:20.020 } 00:19:20.020 } 00:19:20.020 }, 00:19:20.020 { 00:19:20.020 "method": "nvmf_subsystem_add_listener", 
00:19:20.020 "params": { 00:19:20.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.020 "listen_address": { 00:19:20.020 "trtype": "TCP", 00:19:20.020 "adrfam": "IPv4", 00:19:20.020 "traddr": "10.0.0.2", 00:19:20.020 "trsvcid": "4420" 00:19:20.020 }, 00:19:20.020 "secure_channel": true 00:19:20.020 } 00:19:20.020 } 00:19:20.020 ] 00:19:20.020 } 00:19:20.020 ] 00:19:20.020 }' 00:19:20.020 17:27:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4100620 00:19:20.020 17:27:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4100620 00:19:20.020 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4100620 ']' 00:19:20.020 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.020 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.020 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.020 17:27:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:20.020 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.020 17:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.020 [2024-07-12 17:27:38.595918] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:19:20.020 [2024-07-12 17:27:38.595961] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.020 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.020 [2024-07-12 17:27:38.652166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.020 [2024-07-12 17:27:38.730343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.020 [2024-07-12 17:27:38.730381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.020 [2024-07-12 17:27:38.730388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.020 [2024-07-12 17:27:38.730394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.020 [2024-07-12 17:27:38.730399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:20.020 [2024-07-12 17:27:38.730453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.277 [2024-07-12 17:27:38.932870] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.277 [2024-07-12 17:27:38.948847] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:20.277 [2024-07-12 17:27:38.964896] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:20.277 [2024-07-12 17:27:38.972702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=4100655 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 4100655 /var/tmp/bdevperf.sock 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4100655 ']' 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:20.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:20.843 17:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:20.843 "subsystems": [ 00:19:20.843 { 00:19:20.843 "subsystem": "keyring", 00:19:20.843 "config": [] 00:19:20.843 }, 00:19:20.843 { 00:19:20.843 "subsystem": "iobuf", 00:19:20.843 "config": [ 00:19:20.843 { 00:19:20.843 "method": "iobuf_set_options", 00:19:20.843 "params": { 00:19:20.843 "small_pool_count": 8192, 00:19:20.843 "large_pool_count": 1024, 00:19:20.843 "small_bufsize": 8192, 00:19:20.843 "large_bufsize": 135168 00:19:20.843 } 00:19:20.843 } 00:19:20.843 ] 00:19:20.843 }, 00:19:20.843 { 00:19:20.843 "subsystem": "sock", 00:19:20.843 "config": [ 00:19:20.843 { 00:19:20.843 "method": "sock_set_default_impl", 00:19:20.843 "params": { 00:19:20.843 "impl_name": "posix" 00:19:20.843 } 00:19:20.843 }, 00:19:20.843 { 00:19:20.843 "method": "sock_impl_set_options", 00:19:20.843 "params": { 00:19:20.843 "impl_name": "ssl", 00:19:20.843 "recv_buf_size": 4096, 00:19:20.843 "send_buf_size": 4096, 00:19:20.843 "enable_recv_pipe": true, 00:19:20.843 "enable_quickack": false, 00:19:20.843 "enable_placement_id": 0, 00:19:20.843 "enable_zerocopy_send_server": true, 00:19:20.843 "enable_zerocopy_send_client": false, 00:19:20.843 "zerocopy_threshold": 0, 00:19:20.843 "tls_version": 0, 00:19:20.843 "enable_ktls": false 00:19:20.843 } 00:19:20.843 }, 00:19:20.843 { 00:19:20.843 "method": "sock_impl_set_options", 00:19:20.843 "params": { 00:19:20.843 "impl_name": "posix", 00:19:20.843 "recv_buf_size": 
2097152, 00:19:20.843 "send_buf_size": 2097152, 00:19:20.843 "enable_recv_pipe": true, 00:19:20.843 "enable_quickack": false, 00:19:20.843 "enable_placement_id": 0, 00:19:20.843 "enable_zerocopy_send_server": true, 00:19:20.843 "enable_zerocopy_send_client": false, 00:19:20.843 "zerocopy_threshold": 0, 00:19:20.843 "tls_version": 0, 00:19:20.843 "enable_ktls": false 00:19:20.843 } 00:19:20.843 } 00:19:20.843 ] 00:19:20.843 }, 00:19:20.843 { 00:19:20.843 "subsystem": "vmd", 00:19:20.843 "config": [] 00:19:20.843 }, 00:19:20.843 { 00:19:20.843 "subsystem": "accel", 00:19:20.843 "config": [ 00:19:20.843 { 00:19:20.843 "method": "accel_set_options", 00:19:20.843 "params": { 00:19:20.843 "small_cache_size": 128, 00:19:20.843 "large_cache_size": 16, 00:19:20.843 "task_count": 2048, 00:19:20.843 "sequence_count": 2048, 00:19:20.843 "buf_count": 2048 00:19:20.843 } 00:19:20.843 } 00:19:20.843 ] 00:19:20.843 }, 00:19:20.843 { 00:19:20.843 "subsystem": "bdev", 00:19:20.843 "config": [ 00:19:20.843 { 00:19:20.843 "method": "bdev_set_options", 00:19:20.843 "params": { 00:19:20.843 "bdev_io_pool_size": 65535, 00:19:20.843 "bdev_io_cache_size": 256, 00:19:20.843 "bdev_auto_examine": true, 00:19:20.843 "iobuf_small_cache_size": 128, 00:19:20.843 "iobuf_large_cache_size": 16 00:19:20.843 } 00:19:20.843 }, 00:19:20.843 { 00:19:20.843 "method": "bdev_raid_set_options", 00:19:20.843 "params": { 00:19:20.843 "process_window_size_kb": 1024 00:19:20.843 } 00:19:20.843 }, 00:19:20.843 { 00:19:20.843 "method": "bdev_iscsi_set_options", 00:19:20.843 "params": { 00:19:20.843 "timeout_sec": 30 00:19:20.843 } 00:19:20.843 }, 00:19:20.843 { 00:19:20.843 "method": "bdev_nvme_set_options", 00:19:20.843 "params": { 00:19:20.843 "action_on_timeout": "none", 00:19:20.843 "timeout_us": 0, 00:19:20.843 "timeout_admin_us": 0, 00:19:20.843 "keep_alive_timeout_ms": 10000, 00:19:20.843 "arbitration_burst": 0, 00:19:20.843 "low_priority_weight": 0, 00:19:20.843 "medium_priority_weight": 0, 00:19:20.843 
"high_priority_weight": 0, 00:19:20.843 "nvme_adminq_poll_period_us": 10000, 00:19:20.843 "nvme_ioq_poll_period_us": 0, 00:19:20.843 "io_queue_requests": 512, 00:19:20.843 "delay_cmd_submit": true, 00:19:20.843 "transport_retry_count": 4, 00:19:20.843 "bdev_retry_count": 3, 00:19:20.843 "transport_ack_timeout": 0, 00:19:20.843 "ctrlr_loss_timeout_sec": 0, 00:19:20.843 "reconnect_delay_sec": 0, 00:19:20.843 "fast_io_fail_timeout_sec": 0, 00:19:20.843 "disable_auto_failback": false, 00:19:20.843 "generate_uuids": false, 00:19:20.843 "transport_tos": 0, 00:19:20.843 "nvme_error_stat": false, 00:19:20.843 "rdma_srq_size": 0, 00:19:20.843 "io_path_stat": false, 00:19:20.843 "allow_accel_sequence": false, 00:19:20.843 "rdma_max_cq_size": 0, 00:19:20.843 "rdma_cm_event_timeout_ms": 0, 00:19:20.843 "dhchap_digests": [ 00:19:20.843 "sha256", 00:19:20.843 "sha384", 00:19:20.843 "sha512" 00:19:20.843 ], 00:19:20.843 "dhchap_dhgroups": [ 00:19:20.843 "null", 00:19:20.843 "ffdhe2048", 00:19:20.843 "ffdhe3072", 00:19:20.843 "ffdhe4096", 00:19:20.843 "ffdhe6144", 00:19:20.843 "ffdhe8192" 00:19:20.843 ] 00:19:20.843 } 00:19:20.843 }, 00:19:20.843 { 00:19:20.843 "method": "bdev_nvme_attach_controller", 00:19:20.843 "params": { 00:19:20.843 "name": "TLSTEST", 00:19:20.843 "trtype": "TCP", 00:19:20.843 "adrfam": "IPv4", 00:19:20.843 "traddr": "10.0.0.2", 00:19:20.843 "trsvcid": "4420", 00:19:20.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.843 "prchk_reftag": false, 00:19:20.843 "prchk_guard": false, 00:19:20.843 "ctrlr_loss_timeout_sec": 0, 00:19:20.843 "reconnect_delay_sec": 0, 00:19:20.843 "fast_io_fail_timeout_sec": 0, 00:19:20.843 "psk": "/tmp/tmp.BFi7x3oWHZ", 00:19:20.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.843 "hdgst": false, 00:19:20.843 "ddgst": false 00:19:20.843 } 00:19:20.843 }, 00:19:20.843 { 00:19:20.844 "method": "bdev_nvme_set_hotplug", 00:19:20.844 "params": { 00:19:20.844 "period_us": 100000, 00:19:20.844 "enable": false 00:19:20.844 } 
00:19:20.844 }, 00:19:20.844 { 00:19:20.844 "method": "bdev_wait_for_examine" 00:19:20.844 } 00:19:20.844 ] 00:19:20.844 }, 00:19:20.844 { 00:19:20.844 "subsystem": "nbd", 00:19:20.844 "config": [] 00:19:20.844 } 00:19:20.844 ] 00:19:20.844 }' 00:19:20.844 [2024-07-12 17:27:39.469143] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:19:20.844 [2024-07-12 17:27:39.469184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4100655 ] 00:19:20.844 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.844 [2024-07-12 17:27:39.519544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.844 [2024-07-12 17:27:39.591482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.101 [2024-07-12 17:27:39.732761] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.101 [2024-07-12 17:27:39.732836] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:21.666 17:27:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.666 17:27:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:21.666 17:27:40 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:21.666 Running I/O for 10 seconds... 
00:19:31.636 00:19:31.636 Latency(us) 00:19:31.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.636 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:31.636 Verification LBA range: start 0x0 length 0x2000 00:19:31.636 TLSTESTn1 : 10.01 5471.69 21.37 0.00 0.00 23357.01 6496.61 71120.81 00:19:31.636 =================================================================================================================== 00:19:31.636 Total : 5471.69 21.37 0.00 0.00 23357.01 6496.61 71120.81 00:19:31.636 0 00:19:31.636 17:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:31.636 17:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 4100655 00:19:31.636 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4100655 ']' 00:19:31.636 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4100655 00:19:31.894 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:31.894 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:31.894 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4100655 00:19:31.894 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:31.894 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:31.894 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4100655' 00:19:31.894 killing process with pid 4100655 00:19:31.894 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4100655 00:19:31.894 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.894 00:19:31.894 Latency(us) 00:19:31.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.894 
=================================================================================================================== 00:19:31.894 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.894 [2024-07-12 17:27:50.460955] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:31.894 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4100655 00:19:31.895 17:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 4100620 00:19:31.895 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4100620 ']' 00:19:31.895 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4100620 00:19:31.895 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:31.895 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:31.895 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4100620 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4100620' 00:19:32.153 killing process with pid 4100620 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4100620 00:19:32.153 [2024-07-12 17:27:50.687113] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4100620 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4102494 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4102494 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4102494 ']' 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.153 17:27:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.411 [2024-07-12 17:27:50.932537] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:19:32.411 [2024-07-12 17:27:50.932586] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.411 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.411 [2024-07-12 17:27:50.988765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.411 [2024-07-12 17:27:51.061565] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:32.411 [2024-07-12 17:27:51.061604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.411 [2024-07-12 17:27:51.061611] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.411 [2024-07-12 17:27:51.061617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.411 [2024-07-12 17:27:51.061622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.411 [2024-07-12 17:27:51.061662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.976 17:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:32.976 17:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:32.976 17:27:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.976 17:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:32.976 17:27:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.235 17:27:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.235 17:27:51 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.BFi7x3oWHZ 00:19:33.235 17:27:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.BFi7x3oWHZ 00:19:33.235 17:27:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:33.235 [2024-07-12 17:27:51.929252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.235 17:27:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:33.494 17:27:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:33.494 [2024-07-12 17:27:52.254072] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:33.494 [2024-07-12 17:27:52.254258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.751 17:27:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:33.751 malloc0 00:19:33.751 17:27:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:34.009 17:27:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BFi7x3oWHZ 00:19:34.009 [2024-07-12 17:27:52.787808] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:34.270 17:27:52 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=4102964 00:19:34.270 17:27:52 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:34.270 17:27:52 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:34.270 17:27:52 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 4102964 /var/tmp/bdevperf.sock 00:19:34.270 17:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4102964 ']' 00:19:34.271 17:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.271 17:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:19:34.271 17:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.271 17:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.271 17:27:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.271 [2024-07-12 17:27:52.852396] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:19:34.271 [2024-07-12 17:27:52.852440] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4102964 ] 00:19:34.271 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.271 [2024-07-12 17:27:52.906616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.271 [2024-07-12 17:27:52.980618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.204 17:27:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.204 17:27:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:35.204 17:27:53 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BFi7x3oWHZ 00:19:35.204 17:27:53 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:35.204 [2024-07-12 17:27:53.972385] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:35.462 
nvme0n1 00:19:35.462 17:27:54 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:35.462 Running I/O for 1 seconds... 00:19:36.397 00:19:36.397 Latency(us) 00:19:36.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.397 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:36.397 Verification LBA range: start 0x0 length 0x2000 00:19:36.397 nvme0n1 : 1.02 4959.01 19.37 0.00 0.00 25580.89 4929.45 26328.38 00:19:36.397 =================================================================================================================== 00:19:36.397 Total : 4959.01 19.37 0.00 0.00 25580.89 4929.45 26328.38 00:19:36.397 0 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 4102964 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4102964 ']' 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4102964 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4102964 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4102964' 00:19:36.655 killing process with pid 4102964 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4102964 00:19:36.655 Received shutdown signal, test time was about 1.000000 seconds 00:19:36.655 00:19:36.655 Latency(us) 00:19:36.655 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:19:36.655 =================================================================================================================== 00:19:36.655 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4102964 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 4102494 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4102494 ']' 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4102494 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.655 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4102494 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4102494' 00:19:36.914 killing process with pid 4102494 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4102494 00:19:36.914 [2024-07-12 17:27:55.450554] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4102494 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=4103442 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4103442 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4103442 ']' 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.914 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.915 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.915 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.915 17:27:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:36.915 17:27:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.173 [2024-07-12 17:27:55.696943] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:19:37.173 [2024-07-12 17:27:55.696989] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.173 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.173 [2024-07-12 17:27:55.753446] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.173 [2024-07-12 17:27:55.832056] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.173 [2024-07-12 17:27:55.832091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:37.173 [2024-07-12 17:27:55.832098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.173 [2024-07-12 17:27:55.832104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.173 [2024-07-12 17:27:55.832110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.173 [2024-07-12 17:27:55.832136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.739 17:27:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.739 17:27:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:37.739 17:27:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:37.739 17:27:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:37.739 17:27:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.997 [2024-07-12 17:27:56.531065] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.997 malloc0 00:19:37.997 [2024-07-12 17:27:56.559385] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:37.997 [2024-07-12 17:27:56.559557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=4103506 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 
4103506 /var/tmp/bdevperf.sock 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4103506 ']' 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:37.997 17:27:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.997 [2024-07-12 17:27:56.631905] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:19:37.997 [2024-07-12 17:27:56.631943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103506 ] 00:19:37.997 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.997 [2024-07-12 17:27:56.686151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.997 [2024-07-12 17:27:56.765438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.931 17:27:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.931 17:27:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:38.931 17:27:57 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BFi7x3oWHZ 00:19:38.931 17:27:57 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:39.188 [2024-07-12 17:27:57.760627] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.188 nvme0n1 00:19:39.188 17:27:57 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:39.188 Running I/O for 1 seconds... 
00:19:40.561 00:19:40.561 Latency(us) 00:19:40.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.561 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:40.561 Verification LBA range: start 0x0 length 0x2000 00:19:40.561 nvme0n1 : 1.02 5486.63 21.43 0.00 0.00 23127.17 6867.03 27354.16 00:19:40.561 =================================================================================================================== 00:19:40.561 Total : 5486.63 21.43 0.00 0.00 23127.17 6867.03 27354.16 00:19:40.561 0 00:19:40.561 17:27:58 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:19:40.561 17:27:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.561 17:27:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.561 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.561 17:27:59 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:19:40.561 "subsystems": [ 00:19:40.561 { 00:19:40.561 "subsystem": "keyring", 00:19:40.561 "config": [ 00:19:40.561 { 00:19:40.561 "method": "keyring_file_add_key", 00:19:40.561 "params": { 00:19:40.561 "name": "key0", 00:19:40.561 "path": "/tmp/tmp.BFi7x3oWHZ" 00:19:40.561 } 00:19:40.561 } 00:19:40.561 ] 00:19:40.561 }, 00:19:40.561 { 00:19:40.561 "subsystem": "iobuf", 00:19:40.561 "config": [ 00:19:40.561 { 00:19:40.561 "method": "iobuf_set_options", 00:19:40.561 "params": { 00:19:40.561 "small_pool_count": 8192, 00:19:40.561 "large_pool_count": 1024, 00:19:40.561 "small_bufsize": 8192, 00:19:40.561 "large_bufsize": 135168 00:19:40.561 } 00:19:40.561 } 00:19:40.561 ] 00:19:40.561 }, 00:19:40.561 { 00:19:40.561 "subsystem": "sock", 00:19:40.561 "config": [ 00:19:40.561 { 00:19:40.561 "method": "sock_set_default_impl", 00:19:40.561 "params": { 00:19:40.561 "impl_name": "posix" 00:19:40.561 } 00:19:40.561 }, 00:19:40.561 { 00:19:40.561 "method": "sock_impl_set_options", 00:19:40.561 
"params": { 00:19:40.561 "impl_name": "ssl", 00:19:40.561 "recv_buf_size": 4096, 00:19:40.561 "send_buf_size": 4096, 00:19:40.561 "enable_recv_pipe": true, 00:19:40.561 "enable_quickack": false, 00:19:40.561 "enable_placement_id": 0, 00:19:40.561 "enable_zerocopy_send_server": true, 00:19:40.561 "enable_zerocopy_send_client": false, 00:19:40.561 "zerocopy_threshold": 0, 00:19:40.561 "tls_version": 0, 00:19:40.561 "enable_ktls": false 00:19:40.561 } 00:19:40.561 }, 00:19:40.561 { 00:19:40.561 "method": "sock_impl_set_options", 00:19:40.561 "params": { 00:19:40.561 "impl_name": "posix", 00:19:40.561 "recv_buf_size": 2097152, 00:19:40.561 "send_buf_size": 2097152, 00:19:40.561 "enable_recv_pipe": true, 00:19:40.561 "enable_quickack": false, 00:19:40.561 "enable_placement_id": 0, 00:19:40.561 "enable_zerocopy_send_server": true, 00:19:40.561 "enable_zerocopy_send_client": false, 00:19:40.561 "zerocopy_threshold": 0, 00:19:40.561 "tls_version": 0, 00:19:40.561 "enable_ktls": false 00:19:40.561 } 00:19:40.561 } 00:19:40.561 ] 00:19:40.561 }, 00:19:40.561 { 00:19:40.561 "subsystem": "vmd", 00:19:40.561 "config": [] 00:19:40.561 }, 00:19:40.561 { 00:19:40.561 "subsystem": "accel", 00:19:40.561 "config": [ 00:19:40.561 { 00:19:40.562 "method": "accel_set_options", 00:19:40.562 "params": { 00:19:40.562 "small_cache_size": 128, 00:19:40.562 "large_cache_size": 16, 00:19:40.562 "task_count": 2048, 00:19:40.562 "sequence_count": 2048, 00:19:40.562 "buf_count": 2048 00:19:40.562 } 00:19:40.562 } 00:19:40.562 ] 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "subsystem": "bdev", 00:19:40.562 "config": [ 00:19:40.562 { 00:19:40.562 "method": "bdev_set_options", 00:19:40.562 "params": { 00:19:40.562 "bdev_io_pool_size": 65535, 00:19:40.562 "bdev_io_cache_size": 256, 00:19:40.562 "bdev_auto_examine": true, 00:19:40.562 "iobuf_small_cache_size": 128, 00:19:40.562 "iobuf_large_cache_size": 16 00:19:40.562 } 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "method": "bdev_raid_set_options", 
00:19:40.562 "params": { 00:19:40.562 "process_window_size_kb": 1024 00:19:40.562 } 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "method": "bdev_iscsi_set_options", 00:19:40.562 "params": { 00:19:40.562 "timeout_sec": 30 00:19:40.562 } 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "method": "bdev_nvme_set_options", 00:19:40.562 "params": { 00:19:40.562 "action_on_timeout": "none", 00:19:40.562 "timeout_us": 0, 00:19:40.562 "timeout_admin_us": 0, 00:19:40.562 "keep_alive_timeout_ms": 10000, 00:19:40.562 "arbitration_burst": 0, 00:19:40.562 "low_priority_weight": 0, 00:19:40.562 "medium_priority_weight": 0, 00:19:40.562 "high_priority_weight": 0, 00:19:40.562 "nvme_adminq_poll_period_us": 10000, 00:19:40.562 "nvme_ioq_poll_period_us": 0, 00:19:40.562 "io_queue_requests": 0, 00:19:40.562 "delay_cmd_submit": true, 00:19:40.562 "transport_retry_count": 4, 00:19:40.562 "bdev_retry_count": 3, 00:19:40.562 "transport_ack_timeout": 0, 00:19:40.562 "ctrlr_loss_timeout_sec": 0, 00:19:40.562 "reconnect_delay_sec": 0, 00:19:40.562 "fast_io_fail_timeout_sec": 0, 00:19:40.562 "disable_auto_failback": false, 00:19:40.562 "generate_uuids": false, 00:19:40.562 "transport_tos": 0, 00:19:40.562 "nvme_error_stat": false, 00:19:40.562 "rdma_srq_size": 0, 00:19:40.562 "io_path_stat": false, 00:19:40.562 "allow_accel_sequence": false, 00:19:40.562 "rdma_max_cq_size": 0, 00:19:40.562 "rdma_cm_event_timeout_ms": 0, 00:19:40.562 "dhchap_digests": [ 00:19:40.562 "sha256", 00:19:40.562 "sha384", 00:19:40.562 "sha512" 00:19:40.562 ], 00:19:40.562 "dhchap_dhgroups": [ 00:19:40.562 "null", 00:19:40.562 "ffdhe2048", 00:19:40.562 "ffdhe3072", 00:19:40.562 "ffdhe4096", 00:19:40.562 "ffdhe6144", 00:19:40.562 "ffdhe8192" 00:19:40.562 ] 00:19:40.562 } 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "method": "bdev_nvme_set_hotplug", 00:19:40.562 "params": { 00:19:40.562 "period_us": 100000, 00:19:40.562 "enable": false 00:19:40.562 } 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "method": "bdev_malloc_create", 
00:19:40.562 "params": { 00:19:40.562 "name": "malloc0", 00:19:40.562 "num_blocks": 8192, 00:19:40.562 "block_size": 4096, 00:19:40.562 "physical_block_size": 4096, 00:19:40.562 "uuid": "a7b92959-2bd1-4d8e-8ac6-260ed7b7f88b", 00:19:40.562 "optimal_io_boundary": 0 00:19:40.562 } 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "method": "bdev_wait_for_examine" 00:19:40.562 } 00:19:40.562 ] 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "subsystem": "nbd", 00:19:40.562 "config": [] 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "subsystem": "scheduler", 00:19:40.562 "config": [ 00:19:40.562 { 00:19:40.562 "method": "framework_set_scheduler", 00:19:40.562 "params": { 00:19:40.562 "name": "static" 00:19:40.562 } 00:19:40.562 } 00:19:40.562 ] 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "subsystem": "nvmf", 00:19:40.562 "config": [ 00:19:40.562 { 00:19:40.562 "method": "nvmf_set_config", 00:19:40.562 "params": { 00:19:40.562 "discovery_filter": "match_any", 00:19:40.562 "admin_cmd_passthru": { 00:19:40.562 "identify_ctrlr": false 00:19:40.562 } 00:19:40.562 } 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "method": "nvmf_set_max_subsystems", 00:19:40.562 "params": { 00:19:40.562 "max_subsystems": 1024 00:19:40.562 } 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "method": "nvmf_set_crdt", 00:19:40.562 "params": { 00:19:40.562 "crdt1": 0, 00:19:40.562 "crdt2": 0, 00:19:40.562 "crdt3": 0 00:19:40.562 } 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "method": "nvmf_create_transport", 00:19:40.562 "params": { 00:19:40.562 "trtype": "TCP", 00:19:40.562 "max_queue_depth": 128, 00:19:40.562 "max_io_qpairs_per_ctrlr": 127, 00:19:40.562 "in_capsule_data_size": 4096, 00:19:40.562 "max_io_size": 131072, 00:19:40.562 "io_unit_size": 131072, 00:19:40.562 "max_aq_depth": 128, 00:19:40.562 "num_shared_buffers": 511, 00:19:40.562 "buf_cache_size": 4294967295, 00:19:40.562 "dif_insert_or_strip": false, 00:19:40.562 "zcopy": false, 00:19:40.562 "c2h_success": false, 00:19:40.562 "sock_priority": 0, 
00:19:40.562 "abort_timeout_sec": 1, 00:19:40.562 "ack_timeout": 0, 00:19:40.562 "data_wr_pool_size": 0 00:19:40.562 } 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "method": "nvmf_create_subsystem", 00:19:40.562 "params": { 00:19:40.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.562 "allow_any_host": false, 00:19:40.562 "serial_number": "00000000000000000000", 00:19:40.562 "model_number": "SPDK bdev Controller", 00:19:40.562 "max_namespaces": 32, 00:19:40.562 "min_cntlid": 1, 00:19:40.562 "max_cntlid": 65519, 00:19:40.562 "ana_reporting": false 00:19:40.562 } 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "method": "nvmf_subsystem_add_host", 00:19:40.562 "params": { 00:19:40.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.562 "host": "nqn.2016-06.io.spdk:host1", 00:19:40.562 "psk": "key0" 00:19:40.562 } 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "method": "nvmf_subsystem_add_ns", 00:19:40.562 "params": { 00:19:40.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.562 "namespace": { 00:19:40.562 "nsid": 1, 00:19:40.562 "bdev_name": "malloc0", 00:19:40.562 "nguid": "A7B929592BD14D8E8AC6260ED7B7F88B", 00:19:40.562 "uuid": "a7b92959-2bd1-4d8e-8ac6-260ed7b7f88b", 00:19:40.562 "no_auto_visible": false 00:19:40.562 } 00:19:40.562 } 00:19:40.562 }, 00:19:40.562 { 00:19:40.562 "method": "nvmf_subsystem_add_listener", 00:19:40.562 "params": { 00:19:40.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.562 "listen_address": { 00:19:40.562 "trtype": "TCP", 00:19:40.562 "adrfam": "IPv4", 00:19:40.562 "traddr": "10.0.0.2", 00:19:40.562 "trsvcid": "4420" 00:19:40.562 }, 00:19:40.562 "secure_channel": true 00:19:40.562 } 00:19:40.562 } 00:19:40.562 ] 00:19:40.562 } 00:19:40.562 ] 00:19:40.562 }' 00:19:40.562 17:27:59 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:40.562 17:27:59 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:19:40.562 "subsystems": [ 00:19:40.562 { 
00:19:40.562 "subsystem": "keyring", 00:19:40.562 "config": [ 00:19:40.562 { 00:19:40.563 "method": "keyring_file_add_key", 00:19:40.563 "params": { 00:19:40.563 "name": "key0", 00:19:40.563 "path": "/tmp/tmp.BFi7x3oWHZ" 00:19:40.563 } 00:19:40.563 } 00:19:40.563 ] 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "subsystem": "iobuf", 00:19:40.563 "config": [ 00:19:40.563 { 00:19:40.563 "method": "iobuf_set_options", 00:19:40.563 "params": { 00:19:40.563 "small_pool_count": 8192, 00:19:40.563 "large_pool_count": 1024, 00:19:40.563 "small_bufsize": 8192, 00:19:40.563 "large_bufsize": 135168 00:19:40.563 } 00:19:40.563 } 00:19:40.563 ] 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "subsystem": "sock", 00:19:40.563 "config": [ 00:19:40.563 { 00:19:40.563 "method": "sock_set_default_impl", 00:19:40.563 "params": { 00:19:40.563 "impl_name": "posix" 00:19:40.563 } 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "method": "sock_impl_set_options", 00:19:40.563 "params": { 00:19:40.563 "impl_name": "ssl", 00:19:40.563 "recv_buf_size": 4096, 00:19:40.563 "send_buf_size": 4096, 00:19:40.563 "enable_recv_pipe": true, 00:19:40.563 "enable_quickack": false, 00:19:40.563 "enable_placement_id": 0, 00:19:40.563 "enable_zerocopy_send_server": true, 00:19:40.563 "enable_zerocopy_send_client": false, 00:19:40.563 "zerocopy_threshold": 0, 00:19:40.563 "tls_version": 0, 00:19:40.563 "enable_ktls": false 00:19:40.563 } 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "method": "sock_impl_set_options", 00:19:40.563 "params": { 00:19:40.563 "impl_name": "posix", 00:19:40.563 "recv_buf_size": 2097152, 00:19:40.563 "send_buf_size": 2097152, 00:19:40.563 "enable_recv_pipe": true, 00:19:40.563 "enable_quickack": false, 00:19:40.563 "enable_placement_id": 0, 00:19:40.563 "enable_zerocopy_send_server": true, 00:19:40.563 "enable_zerocopy_send_client": false, 00:19:40.563 "zerocopy_threshold": 0, 00:19:40.563 "tls_version": 0, 00:19:40.563 "enable_ktls": false 00:19:40.563 } 00:19:40.563 } 00:19:40.563 ] 
00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "subsystem": "vmd", 00:19:40.563 "config": [] 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "subsystem": "accel", 00:19:40.563 "config": [ 00:19:40.563 { 00:19:40.563 "method": "accel_set_options", 00:19:40.563 "params": { 00:19:40.563 "small_cache_size": 128, 00:19:40.563 "large_cache_size": 16, 00:19:40.563 "task_count": 2048, 00:19:40.563 "sequence_count": 2048, 00:19:40.563 "buf_count": 2048 00:19:40.563 } 00:19:40.563 } 00:19:40.563 ] 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "subsystem": "bdev", 00:19:40.563 "config": [ 00:19:40.563 { 00:19:40.563 "method": "bdev_set_options", 00:19:40.563 "params": { 00:19:40.563 "bdev_io_pool_size": 65535, 00:19:40.563 "bdev_io_cache_size": 256, 00:19:40.563 "bdev_auto_examine": true, 00:19:40.563 "iobuf_small_cache_size": 128, 00:19:40.563 "iobuf_large_cache_size": 16 00:19:40.563 } 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "method": "bdev_raid_set_options", 00:19:40.563 "params": { 00:19:40.563 "process_window_size_kb": 1024 00:19:40.563 } 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "method": "bdev_iscsi_set_options", 00:19:40.563 "params": { 00:19:40.563 "timeout_sec": 30 00:19:40.563 } 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "method": "bdev_nvme_set_options", 00:19:40.563 "params": { 00:19:40.563 "action_on_timeout": "none", 00:19:40.563 "timeout_us": 0, 00:19:40.563 "timeout_admin_us": 0, 00:19:40.563 "keep_alive_timeout_ms": 10000, 00:19:40.563 "arbitration_burst": 0, 00:19:40.563 "low_priority_weight": 0, 00:19:40.563 "medium_priority_weight": 0, 00:19:40.563 "high_priority_weight": 0, 00:19:40.563 "nvme_adminq_poll_period_us": 10000, 00:19:40.563 "nvme_ioq_poll_period_us": 0, 00:19:40.563 "io_queue_requests": 512, 00:19:40.563 "delay_cmd_submit": true, 00:19:40.563 "transport_retry_count": 4, 00:19:40.563 "bdev_retry_count": 3, 00:19:40.563 "transport_ack_timeout": 0, 00:19:40.563 "ctrlr_loss_timeout_sec": 0, 00:19:40.563 "reconnect_delay_sec": 0, 00:19:40.563 
"fast_io_fail_timeout_sec": 0, 00:19:40.563 "disable_auto_failback": false, 00:19:40.563 "generate_uuids": false, 00:19:40.563 "transport_tos": 0, 00:19:40.563 "nvme_error_stat": false, 00:19:40.563 "rdma_srq_size": 0, 00:19:40.563 "io_path_stat": false, 00:19:40.563 "allow_accel_sequence": false, 00:19:40.563 "rdma_max_cq_size": 0, 00:19:40.563 "rdma_cm_event_timeout_ms": 0, 00:19:40.563 "dhchap_digests": [ 00:19:40.563 "sha256", 00:19:40.563 "sha384", 00:19:40.563 "sha512" 00:19:40.563 ], 00:19:40.563 "dhchap_dhgroups": [ 00:19:40.563 "null", 00:19:40.563 "ffdhe2048", 00:19:40.563 "ffdhe3072", 00:19:40.563 "ffdhe4096", 00:19:40.563 "ffdhe6144", 00:19:40.563 "ffdhe8192" 00:19:40.563 ] 00:19:40.563 } 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "method": "bdev_nvme_attach_controller", 00:19:40.563 "params": { 00:19:40.563 "name": "nvme0", 00:19:40.563 "trtype": "TCP", 00:19:40.563 "adrfam": "IPv4", 00:19:40.563 "traddr": "10.0.0.2", 00:19:40.563 "trsvcid": "4420", 00:19:40.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.563 "prchk_reftag": false, 00:19:40.563 "prchk_guard": false, 00:19:40.563 "ctrlr_loss_timeout_sec": 0, 00:19:40.563 "reconnect_delay_sec": 0, 00:19:40.563 "fast_io_fail_timeout_sec": 0, 00:19:40.563 "psk": "key0", 00:19:40.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.563 "hdgst": false, 00:19:40.563 "ddgst": false 00:19:40.563 } 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "method": "bdev_nvme_set_hotplug", 00:19:40.563 "params": { 00:19:40.563 "period_us": 100000, 00:19:40.563 "enable": false 00:19:40.563 } 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "method": "bdev_enable_histogram", 00:19:40.563 "params": { 00:19:40.563 "name": "nvme0n1", 00:19:40.563 "enable": true 00:19:40.563 } 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "method": "bdev_wait_for_examine" 00:19:40.563 } 00:19:40.563 ] 00:19:40.563 }, 00:19:40.563 { 00:19:40.563 "subsystem": "nbd", 00:19:40.563 "config": [] 00:19:40.563 } 00:19:40.563 ] 00:19:40.563 }' 00:19:40.563 
17:27:59 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 4103506 00:19:40.563 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4103506 ']' 00:19:40.563 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4103506 00:19:40.563 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:40.563 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.822 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4103506 00:19:40.822 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:40.822 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:40.822 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4103506' 00:19:40.822 killing process with pid 4103506 00:19:40.822 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4103506 00:19:40.822 Received shutdown signal, test time was about 1.000000 seconds 00:19:40.822 00:19:40.822 Latency(us) 00:19:40.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.822 =================================================================================================================== 00:19:40.822 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.822 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4103506 00:19:40.822 17:27:59 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 4103442 00:19:40.822 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4103442 ']' 00:19:40.822 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4103442 00:19:40.822 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:40.822 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.822 17:27:59 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4103442 00:19:41.081 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:41.081 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:41.081 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4103442' 00:19:41.081 killing process with pid 4103442 00:19:41.081 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4103442 00:19:41.081 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4103442 00:19:41.081 17:27:59 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:19:41.081 17:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:41.081 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:41.081 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.081 17:27:59 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:19:41.081 "subsystems": [ 00:19:41.081 { 00:19:41.081 "subsystem": "keyring", 00:19:41.081 "config": [ 00:19:41.081 { 00:19:41.081 "method": "keyring_file_add_key", 00:19:41.081 "params": { 00:19:41.081 "name": "key0", 00:19:41.081 "path": "/tmp/tmp.BFi7x3oWHZ" 00:19:41.081 } 00:19:41.081 } 00:19:41.081 ] 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "subsystem": "iobuf", 00:19:41.081 "config": [ 00:19:41.081 { 00:19:41.081 "method": "iobuf_set_options", 00:19:41.081 "params": { 00:19:41.081 "small_pool_count": 8192, 00:19:41.081 "large_pool_count": 1024, 00:19:41.081 "small_bufsize": 8192, 00:19:41.081 "large_bufsize": 135168 00:19:41.081 } 00:19:41.081 } 00:19:41.081 ] 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "subsystem": "sock", 00:19:41.081 "config": [ 00:19:41.081 { 00:19:41.081 "method": "sock_set_default_impl", 00:19:41.081 "params": { 00:19:41.081 "impl_name": "posix" 00:19:41.081 } 00:19:41.081 }, 00:19:41.081 { 
00:19:41.081 "method": "sock_impl_set_options", 00:19:41.081 "params": { 00:19:41.081 "impl_name": "ssl", 00:19:41.081 "recv_buf_size": 4096, 00:19:41.081 "send_buf_size": 4096, 00:19:41.081 "enable_recv_pipe": true, 00:19:41.081 "enable_quickack": false, 00:19:41.081 "enable_placement_id": 0, 00:19:41.081 "enable_zerocopy_send_server": true, 00:19:41.081 "enable_zerocopy_send_client": false, 00:19:41.081 "zerocopy_threshold": 0, 00:19:41.081 "tls_version": 0, 00:19:41.081 "enable_ktls": false 00:19:41.081 } 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "method": "sock_impl_set_options", 00:19:41.081 "params": { 00:19:41.081 "impl_name": "posix", 00:19:41.081 "recv_buf_size": 2097152, 00:19:41.081 "send_buf_size": 2097152, 00:19:41.081 "enable_recv_pipe": true, 00:19:41.081 "enable_quickack": false, 00:19:41.081 "enable_placement_id": 0, 00:19:41.081 "enable_zerocopy_send_server": true, 00:19:41.081 "enable_zerocopy_send_client": false, 00:19:41.081 "zerocopy_threshold": 0, 00:19:41.081 "tls_version": 0, 00:19:41.081 "enable_ktls": false 00:19:41.081 } 00:19:41.081 } 00:19:41.081 ] 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "subsystem": "vmd", 00:19:41.081 "config": [] 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "subsystem": "accel", 00:19:41.081 "config": [ 00:19:41.081 { 00:19:41.081 "method": "accel_set_options", 00:19:41.081 "params": { 00:19:41.081 "small_cache_size": 128, 00:19:41.081 "large_cache_size": 16, 00:19:41.081 "task_count": 2048, 00:19:41.081 "sequence_count": 2048, 00:19:41.081 "buf_count": 2048 00:19:41.081 } 00:19:41.081 } 00:19:41.081 ] 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "subsystem": "bdev", 00:19:41.081 "config": [ 00:19:41.081 { 00:19:41.081 "method": "bdev_set_options", 00:19:41.081 "params": { 00:19:41.081 "bdev_io_pool_size": 65535, 00:19:41.081 "bdev_io_cache_size": 256, 00:19:41.081 "bdev_auto_examine": true, 00:19:41.081 "iobuf_small_cache_size": 128, 00:19:41.081 "iobuf_large_cache_size": 16 00:19:41.081 } 00:19:41.081 }, 
00:19:41.081 { 00:19:41.081 "method": "bdev_raid_set_options", 00:19:41.081 "params": { 00:19:41.081 "process_window_size_kb": 1024 00:19:41.081 } 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "method": "bdev_iscsi_set_options", 00:19:41.081 "params": { 00:19:41.081 "timeout_sec": 30 00:19:41.081 } 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "method": "bdev_nvme_set_options", 00:19:41.081 "params": { 00:19:41.081 "action_on_timeout": "none", 00:19:41.081 "timeout_us": 0, 00:19:41.081 "timeout_admin_us": 0, 00:19:41.081 "keep_alive_timeout_ms": 10000, 00:19:41.081 "arbitration_burst": 0, 00:19:41.081 "low_priority_weight": 0, 00:19:41.081 "medium_priority_weight": 0, 00:19:41.081 "high_priority_weight": 0, 00:19:41.081 "nvme_adminq_poll_period_us": 10000, 00:19:41.081 "nvme_ioq_poll_period_us": 0, 00:19:41.081 "io_queue_requests": 0, 00:19:41.081 "delay_cmd_submit": true, 00:19:41.081 "transport_retry_count": 4, 00:19:41.081 "bdev_retry_count": 3, 00:19:41.081 "transport_ack_timeout": 0, 00:19:41.081 "ctrlr_loss_timeout_sec": 0, 00:19:41.081 "reconnect_delay_sec": 0, 00:19:41.081 "fast_io_fail_timeout_sec": 0, 00:19:41.081 "disable_auto_failback": false, 00:19:41.081 "generate_uuids": false, 00:19:41.081 "transport_tos": 0, 00:19:41.081 "nvme_error_stat": false, 00:19:41.081 "rdma_srq_size": 0, 00:19:41.081 "io_path_stat": false, 00:19:41.081 "allow_accel_sequence": false, 00:19:41.081 "rdma_max_cq_size": 0, 00:19:41.081 "rdma_cm_event_timeout_ms": 0, 00:19:41.081 "dhchap_digests": [ 00:19:41.081 "sha256", 00:19:41.081 "sha384", 00:19:41.081 "sha512" 00:19:41.081 ], 00:19:41.081 "dhchap_dhgroups": [ 00:19:41.081 "null", 00:19:41.081 "ffdhe2048", 00:19:41.081 "ffdhe3072", 00:19:41.081 "ffdhe4096", 00:19:41.081 "ffdhe6144", 00:19:41.081 "ffdhe8192" 00:19:41.081 ] 00:19:41.081 } 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "method": "bdev_nvme_set_hotplug", 00:19:41.081 "params": { 00:19:41.081 "period_us": 100000, 00:19:41.081 "enable": false 00:19:41.081 } 00:19:41.081 
}, 00:19:41.081 { 00:19:41.081 "method": "bdev_malloc_create", 00:19:41.081 "params": { 00:19:41.081 "name": "malloc0", 00:19:41.081 "num_blocks": 8192, 00:19:41.081 "block_size": 4096, 00:19:41.081 "physical_block_size": 4096, 00:19:41.081 "uuid": "a7b92959-2bd1-4d8e-8ac6-260ed7b7f88b", 00:19:41.081 "optimal_io_boundary": 0 00:19:41.081 } 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "method": "bdev_wait_for_examine" 00:19:41.081 } 00:19:41.081 ] 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "subsystem": "nbd", 00:19:41.081 "config": [] 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "subsystem": "scheduler", 00:19:41.081 "config": [ 00:19:41.081 { 00:19:41.081 "method": "framework_set_scheduler", 00:19:41.081 "params": { 00:19:41.081 "name": "static" 00:19:41.081 } 00:19:41.081 } 00:19:41.081 ] 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "subsystem": "nvmf", 00:19:41.081 "config": [ 00:19:41.081 { 00:19:41.081 "method": "nvmf_set_config", 00:19:41.081 "params": { 00:19:41.081 "discovery_filter": "match_any", 00:19:41.081 "admin_cmd_passthru": { 00:19:41.081 "identify_ctrlr": false 00:19:41.081 } 00:19:41.081 } 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "method": "nvmf_set_max_subsystems", 00:19:41.081 "params": { 00:19:41.081 "max_subsystems": 1024 00:19:41.081 } 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "method": "nvmf_set_crdt", 00:19:41.081 "params": { 00:19:41.081 "crdt1": 0, 00:19:41.081 "crdt2": 0, 00:19:41.081 "crdt3": 0 00:19:41.081 } 00:19:41.081 }, 00:19:41.081 { 00:19:41.081 "method": "nvmf_create_transport", 00:19:41.081 "params": { 00:19:41.081 "trtype": "TCP", 00:19:41.081 "max_queue_depth": 128, 00:19:41.082 "max_io_qpairs_per_ctrlr": 127, 00:19:41.082 "in_capsule_data_size": 4096, 00:19:41.082 "max_io_size": 131072, 00:19:41.082 "io_unit_size": 131072, 00:19:41.082 "max_aq_depth": 128, 00:19:41.082 "num_shared_buffers": 511, 00:19:41.082 "buf_cache_size": 4294967295, 00:19:41.082 "dif_insert_or_strip": false, 00:19:41.082 "zcopy": false, 00:19:41.082 
"c2h_success": false, 00:19:41.082 "sock_priority": 0, 00:19:41.082 "abort_timeout_sec": 1, 00:19:41.082 "ack_timeout": 0, 00:19:41.082 "data_wr_pool_size": 0 00:19:41.082 } 00:19:41.082 }, 00:19:41.082 { 00:19:41.082 "method": "nvmf_create_subsystem", 00:19:41.082 "params": { 00:19:41.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.082 "allow_any_host": false, 00:19:41.082 "serial_number": "00000000000000000000", 00:19:41.082 "model_number": "SPDK bdev Controller", 00:19:41.082 "max_namespaces": 32, 00:19:41.082 "min_cntlid": 1, 00:19:41.082 "max_cntlid": 65519, 00:19:41.082 "ana_reporting": false 00:19:41.082 } 00:19:41.082 }, 00:19:41.082 { 00:19:41.082 "method": "nvmf_subsystem_add_host", 00:19:41.082 "params": { 00:19:41.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.082 "host": "nqn.2016-06.io.spdk:host1", 00:19:41.082 "psk": "key0" 00:19:41.082 } 00:19:41.082 }, 00:19:41.082 { 00:19:41.082 "method": "nvmf_subsystem_add_ns", 00:19:41.082 "params": { 00:19:41.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.082 "namespace": { 00:19:41.082 "nsid": 1, 00:19:41.082 "bdev_name": "malloc0", 00:19:41.082 "nguid": "A7B929592BD14D8E8AC6260ED7B7F88B", 00:19:41.082 "uuid": "a7b92959-2bd1-4d8e-8ac6-260ed7b7f88b", 00:19:41.082 "no_auto_visible": false 00:19:41.082 } 00:19:41.082 } 00:19:41.082 }, 00:19:41.082 { 00:19:41.082 "method": "nvmf_subsystem_add_listener", 00:19:41.082 "params": { 00:19:41.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.082 "listen_address": { 00:19:41.082 "trtype": "TCP", 00:19:41.082 "adrfam": "IPv4", 00:19:41.082 "traddr": "10.0.0.2", 00:19:41.082 "trsvcid": "4420" 00:19:41.082 }, 00:19:41.082 "secure_channel": true 00:19:41.082 } 00:19:41.082 } 00:19:41.082 ] 00:19:41.082 } 00:19:41.082 ] 00:19:41.082 }' 00:19:41.082 17:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4104164 00:19:41.082 17:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4104164 00:19:41.082 17:27:59 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@829 -- # '[' -z 4104164 ']' 00:19:41.082 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.082 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.082 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.082 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.082 17:27:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:41.082 17:27:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.082 [2024-07-12 17:27:59.847888] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:19:41.082 [2024-07-12 17:27:59.847932] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.340 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.340 [2024-07-12 17:27:59.904935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.340 [2024-07-12 17:27:59.983573] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.340 [2024-07-12 17:27:59.983608] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:41.340 [2024-07-12 17:27:59.983615] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.340 [2024-07-12 17:27:59.983621] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.340 [2024-07-12 17:27:59.983626] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.340 [2024-07-12 17:27:59.983696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.598 [2024-07-12 17:28:00.195762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.598 [2024-07-12 17:28:00.227790] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:41.598 [2024-07-12 17:28:00.235591] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=4104198 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 4104198 /var/tmp/bdevperf.sock 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4104198 ']' 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.164 17:28:00 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:42.164 17:28:00 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:19:42.164 "subsystems": [ 00:19:42.164 { 00:19:42.164 "subsystem": "keyring", 00:19:42.164 "config": [ 00:19:42.164 { 00:19:42.164 "method": "keyring_file_add_key", 00:19:42.164 "params": { 00:19:42.164 "name": "key0", 00:19:42.164 "path": "/tmp/tmp.BFi7x3oWHZ" 00:19:42.164 } 00:19:42.164 } 00:19:42.164 ] 00:19:42.164 }, 00:19:42.164 { 00:19:42.164 "subsystem": "iobuf", 00:19:42.164 "config": [ 00:19:42.164 { 00:19:42.164 "method": "iobuf_set_options", 00:19:42.164 "params": { 00:19:42.164 "small_pool_count": 8192, 00:19:42.164 "large_pool_count": 1024, 00:19:42.164 "small_bufsize": 8192, 00:19:42.164 "large_bufsize": 135168 00:19:42.164 } 00:19:42.164 } 00:19:42.164 ] 00:19:42.164 }, 00:19:42.164 { 00:19:42.164 "subsystem": "sock", 00:19:42.164 "config": [ 00:19:42.164 { 00:19:42.164 "method": "sock_set_default_impl", 00:19:42.164 "params": { 00:19:42.164 "impl_name": "posix" 00:19:42.164 } 00:19:42.164 }, 00:19:42.164 { 00:19:42.164 "method": "sock_impl_set_options", 00:19:42.164 "params": { 00:19:42.164 "impl_name": "ssl", 00:19:42.164 "recv_buf_size": 4096, 00:19:42.164 "send_buf_size": 4096, 00:19:42.164 "enable_recv_pipe": true, 00:19:42.164 "enable_quickack": false, 00:19:42.164 "enable_placement_id": 0, 00:19:42.164 
"enable_zerocopy_send_server": true, 00:19:42.164 "enable_zerocopy_send_client": false, 00:19:42.164 "zerocopy_threshold": 0, 00:19:42.164 "tls_version": 0, 00:19:42.164 "enable_ktls": false 00:19:42.164 } 00:19:42.164 }, 00:19:42.164 { 00:19:42.164 "method": "sock_impl_set_options", 00:19:42.164 "params": { 00:19:42.164 "impl_name": "posix", 00:19:42.164 "recv_buf_size": 2097152, 00:19:42.164 "send_buf_size": 2097152, 00:19:42.164 "enable_recv_pipe": true, 00:19:42.164 "enable_quickack": false, 00:19:42.164 "enable_placement_id": 0, 00:19:42.164 "enable_zerocopy_send_server": true, 00:19:42.164 "enable_zerocopy_send_client": false, 00:19:42.164 "zerocopy_threshold": 0, 00:19:42.164 "tls_version": 0, 00:19:42.164 "enable_ktls": false 00:19:42.164 } 00:19:42.165 } 00:19:42.165 ] 00:19:42.165 }, 00:19:42.165 { 00:19:42.165 "subsystem": "vmd", 00:19:42.165 "config": [] 00:19:42.165 }, 00:19:42.165 { 00:19:42.165 "subsystem": "accel", 00:19:42.165 "config": [ 00:19:42.165 { 00:19:42.165 "method": "accel_set_options", 00:19:42.165 "params": { 00:19:42.165 "small_cache_size": 128, 00:19:42.165 "large_cache_size": 16, 00:19:42.165 "task_count": 2048, 00:19:42.165 "sequence_count": 2048, 00:19:42.165 "buf_count": 2048 00:19:42.165 } 00:19:42.165 } 00:19:42.165 ] 00:19:42.165 }, 00:19:42.165 { 00:19:42.165 "subsystem": "bdev", 00:19:42.165 "config": [ 00:19:42.165 { 00:19:42.165 "method": "bdev_set_options", 00:19:42.165 "params": { 00:19:42.165 "bdev_io_pool_size": 65535, 00:19:42.165 "bdev_io_cache_size": 256, 00:19:42.165 "bdev_auto_examine": true, 00:19:42.165 "iobuf_small_cache_size": 128, 00:19:42.165 "iobuf_large_cache_size": 16 00:19:42.165 } 00:19:42.165 }, 00:19:42.165 { 00:19:42.165 "method": "bdev_raid_set_options", 00:19:42.165 "params": { 00:19:42.165 "process_window_size_kb": 1024 00:19:42.165 } 00:19:42.165 }, 00:19:42.165 { 00:19:42.165 "method": "bdev_iscsi_set_options", 00:19:42.165 "params": { 00:19:42.165 "timeout_sec": 30 00:19:42.165 } 00:19:42.165 }, 
00:19:42.165 { 00:19:42.165 "method": "bdev_nvme_set_options", 00:19:42.165 "params": { 00:19:42.165 "action_on_timeout": "none", 00:19:42.165 "timeout_us": 0, 00:19:42.165 "timeout_admin_us": 0, 00:19:42.165 "keep_alive_timeout_ms": 10000, 00:19:42.165 "arbitration_burst": 0, 00:19:42.165 "low_priority_weight": 0, 00:19:42.165 "medium_priority_weight": 0, 00:19:42.165 "high_priority_weight": 0, 00:19:42.165 "nvme_adminq_poll_period_us": 10000, 00:19:42.165 "nvme_ioq_poll_period_us": 0, 00:19:42.165 "io_queue_requests": 512, 00:19:42.165 "delay_cmd_submit": true, 00:19:42.165 "transport_retry_count": 4, 00:19:42.165 "bdev_retry_count": 3, 00:19:42.165 "transport_ack_timeout": 0, 00:19:42.165 "ctrlr_loss_timeout_sec": 0, 00:19:42.165 "reconnect_delay_sec": 0, 00:19:42.165 "fast_io_fail_timeout_sec": 0, 00:19:42.165 "disable_auto_failback": false, 00:19:42.165 "generate_uuids": false, 00:19:42.165 "transport_tos": 0, 00:19:42.165 "nvme_error_stat": false, 00:19:42.165 "rdma_srq_size": 0, 00:19:42.165 "io_path_stat": false, 00:19:42.165 "allow_accel_sequence": false, 00:19:42.165 "rdma_max_cq_size": 0, 00:19:42.165 "rdma_cm_event_timeout_ms": 0, 00:19:42.165 "dhchap_digests": [ 00:19:42.165 "sha256", 00:19:42.165 "sha384", 00:19:42.165 "sha512" 00:19:42.165 ], 00:19:42.165 "dhchap_dhgroups": [ 00:19:42.165 "null", 00:19:42.165 "ffdhe2048", 00:19:42.165 "ffdhe3072", 00:19:42.165 "ffdhe4096", 00:19:42.165 "ffdhe6144", 00:19:42.165 "ffdhe8192" 00:19:42.165 ] 00:19:42.165 } 00:19:42.165 }, 00:19:42.165 { 00:19:42.165 "method": "bdev_nvme_attach_controller", 00:19:42.165 "params": { 00:19:42.165 "name": "nvme0", 00:19:42.165 "trtype": "TCP", 00:19:42.165 "adrfam": "IPv4", 00:19:42.165 "traddr": "10.0.0.2", 00:19:42.165 "trsvcid": "4420", 00:19:42.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.165 "prchk_reftag": false, 00:19:42.165 "prchk_guard": false, 00:19:42.165 "ctrlr_loss_timeout_sec": 0, 00:19:42.165 "reconnect_delay_sec": 0, 00:19:42.165 
"fast_io_fail_timeout_sec": 0, 00:19:42.165 "psk": "key0", 00:19:42.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.165 "hdgst": false, 00:19:42.165 "ddgst": false 00:19:42.165 } 00:19:42.165 }, 00:19:42.165 { 00:19:42.165 "method": "bdev_nvme_set_hotplug", 00:19:42.165 "params": { 00:19:42.165 "period_us": 100000, 00:19:42.165 "enable": false 00:19:42.165 } 00:19:42.165 }, 00:19:42.165 { 00:19:42.165 "method": "bdev_enable_histogram", 00:19:42.165 "params": { 00:19:42.165 "name": "nvme0n1", 00:19:42.165 "enable": true 00:19:42.165 } 00:19:42.165 }, 00:19:42.165 { 00:19:42.165 "method": "bdev_wait_for_examine" 00:19:42.165 } 00:19:42.165 ] 00:19:42.165 }, 00:19:42.165 { 00:19:42.165 "subsystem": "nbd", 00:19:42.165 "config": [] 00:19:42.165 } 00:19:42.165 ] 00:19:42.165 }' 00:19:42.165 [2024-07-12 17:28:00.712510] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:19:42.165 [2024-07-12 17:28:00.712553] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104198 ] 00:19:42.165 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.165 [2024-07-12 17:28:00.767160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.165 [2024-07-12 17:28:00.839056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.423 [2024-07-12 17:28:00.989892] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.987 17:28:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.987 17:28:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:42.987 17:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:42.987 
17:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:19:42.987 17:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.987 17:28:01 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:43.244 Running I/O for 1 seconds... 00:19:44.212 00:19:44.212 Latency(us) 00:19:44.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.212 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:44.212 Verification LBA range: start 0x0 length 0x2000 00:19:44.212 nvme0n1 : 1.01 5234.74 20.45 0.00 0.00 24267.32 4986.43 30089.57 00:19:44.212 =================================================================================================================== 00:19:44.212 Total : 5234.74 20.45 0.00 0.00 24267.32 4986.43 30089.57 00:19:44.212 0 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:44.212 nvmf_trace.0 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 4104198 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4104198 ']' 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4104198 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4104198 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4104198' 00:19:44.212 killing process with pid 4104198 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4104198 00:19:44.212 Received shutdown signal, test time was about 1.000000 seconds 00:19:44.212 00:19:44.212 Latency(us) 00:19:44.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.212 =================================================================================================================== 00:19:44.212 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:44.212 17:28:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4104198 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:44.481 rmmod nvme_tcp 00:19:44.481 rmmod nvme_fabrics 00:19:44.481 rmmod nvme_keyring 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 4104164 ']' 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 4104164 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4104164 ']' 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4104164 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4104164 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4104164' 00:19:44.481 killing process with pid 4104164 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4104164 00:19:44.481 17:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4104164 00:19:44.739 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:44.739 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:19:44.739 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:44.739 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.739 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:44.739 17:28:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.739 17:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.739 17:28:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.264 17:28:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:47.264 17:28:05 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.N0TMusoiJJ /tmp/tmp.kZjjWI8MVT /tmp/tmp.BFi7x3oWHZ 00:19:47.264 00:19:47.264 real 1m24.747s 00:19:47.264 user 2m11.723s 00:19:47.264 sys 0m27.964s 00:19:47.264 17:28:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:47.264 17:28:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.264 ************************************ 00:19:47.264 END TEST nvmf_tls 00:19:47.264 ************************************ 00:19:47.264 17:28:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:47.264 17:28:05 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:47.264 17:28:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:47.264 17:28:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.264 17:28:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:47.264 ************************************ 00:19:47.264 START TEST nvmf_fips 00:19:47.264 ************************************ 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:47.264 * Looking for test storage... 00:19:47.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.264 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:47.265 Error setting digest 00:19:47.265 00A2741E6A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:47.265 00A2741E6A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.265 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:19:47.266 17:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:52.527 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:52.527 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:52.527 Found net devices under 0000:86:00.0: cvl_0_0 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:52.527 Found net devices under 0000:86:00.1: cvl_0_1 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:52.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:19:52.527 00:19:52.527 --- 10.0.0.2 ping statistics --- 00:19:52.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.527 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:52.527 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:52.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:19:52.527 00:19:52.527 --- 10.0.0.1 ping statistics --- 00:19:52.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.528 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:19:52.528 17:28:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=4108184 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 4108184 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 4108184 ']' 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:52.528 17:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:52.528 [2024-07-12 17:28:11.099006] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:19:52.528 [2024-07-12 17:28:11.099052] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.528 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.528 [2024-07-12 17:28:11.156095] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.528 [2024-07-12 17:28:11.232917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.528 [2024-07-12 17:28:11.232950] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.528 [2024-07-12 17:28:11.232958] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.528 [2024-07-12 17:28:11.232963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.528 [2024-07-12 17:28:11.232968] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:52.528 [2024-07-12 17:28:11.232985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:53.462 17:28:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:53.462 [2024-07-12 17:28:12.072299] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.462 [2024-07-12 17:28:12.088289] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:19:53.462 [2024-07-12 17:28:12.088467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.462 [2024-07-12 17:28:12.116464] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:53.462 malloc0 00:19:53.462 17:28:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:53.462 17:28:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=4108262 00:19:53.462 17:28:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:53.462 17:28:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 4108262 /var/tmp/bdevperf.sock 00:19:53.462 17:28:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 4108262 ']' 00:19:53.462 17:28:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.462 17:28:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.462 17:28:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.462 17:28:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.462 17:28:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:53.462 [2024-07-12 17:28:12.207427] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:19:53.462 [2024-07-12 17:28:12.207473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108262 ] 00:19:53.462 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.720 [2024-07-12 17:28:12.257555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.720 [2024-07-12 17:28:12.335914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.285 17:28:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.285 17:28:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:54.285 17:28:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:54.542 [2024-07-12 17:28:13.153439] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.542 [2024-07-12 17:28:13.153513] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:54.542 TLSTESTn1 00:19:54.542 17:28:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:54.799 Running I/O for 10 seconds... 
00:20:04.764 00:20:04.764 Latency(us) 00:20:04.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.764 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:04.764 Verification LBA range: start 0x0 length 0x2000 00:20:04.764 TLSTESTn1 : 10.02 5403.53 21.11 0.00 0.00 23647.40 5014.93 45134.36 00:20:04.764 =================================================================================================================== 00:20:04.764 Total : 5403.53 21.11 0.00 0.00 23647.40 5014.93 45134.36 00:20:04.764 0 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:04.764 nvmf_trace.0 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4108262 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 4108262 ']' 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill 
-0 4108262 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4108262 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:04.764 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:04.765 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4108262' 00:20:04.765 killing process with pid 4108262 00:20:04.765 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 4108262 00:20:04.765 Received shutdown signal, test time was about 10.000000 seconds 00:20:04.765 00:20:04.765 Latency(us) 00:20:04.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.765 =================================================================================================================== 00:20:04.765 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.765 [2024-07-12 17:28:23.521256] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:04.765 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 4108262 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:20:05.022 rmmod nvme_tcp 00:20:05.022 rmmod nvme_fabrics 00:20:05.022 rmmod nvme_keyring 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 4108184 ']' 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 4108184 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 4108184 ']' 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 4108184 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.022 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4108184 00:20:05.280 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:05.280 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:05.280 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4108184' 00:20:05.280 killing process with pid 4108184 00:20:05.280 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 4108184 00:20:05.280 [2024-07-12 17:28:23.805186] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:05.280 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 4108184 00:20:05.280 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:05.280 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:05.280 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:20:05.280 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.280 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:05.280 17:28:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.280 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.280 17:28:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.813 17:28:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:07.813 17:28:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:07.813 00:20:07.813 real 0m20.520s 00:20:07.813 user 0m22.940s 00:20:07.813 sys 0m8.417s 00:20:07.813 17:28:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:07.813 17:28:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:07.813 ************************************ 00:20:07.813 END TEST nvmf_fips 00:20:07.813 ************************************ 00:20:07.813 17:28:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:07.813 17:28:26 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:07.813 17:28:26 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:07.813 17:28:26 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:07.813 17:28:26 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:07.813 17:28:26 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:07.813 17:28:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:13.083 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:13.083 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:13.083 Found net devices under 0000:86:00.0: cvl_0_0 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:13.083 Found net devices under 0000:86:00.1: cvl_0_1 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:13.083 17:28:31 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:13.083 17:28:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:13.083 17:28:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.083 17:28:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:13.083 ************************************ 00:20:13.083 START TEST nvmf_perf_adq 00:20:13.083 ************************************ 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:13.083 * Looking for test storage... 00:20:13.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.083 17:28:31 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.083 17:28:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:13.084 17:28:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:18.349 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:18.350 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:18.350 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:18.350 Found net devices under 0000:86:00.0: cvl_0_0 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:18.350 Found net devices under 0000:86:00.1: cvl_0_1 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:18.350 17:28:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:18.916 17:28:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:20.818 17:28:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:26.165 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.165 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:20:26.166 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:26.166 Found net devices under 0000:86:00.0: cvl_0_0 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:26.166 Found net devices under 0000:86:00.1: cvl_0_1 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:20:26.166 00:20:26.166 --- 10.0.0.2 ping statistics --- 00:20:26.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.166 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:20:26.166 00:20:26.166 --- 10.0.0.1 ping statistics --- 00:20:26.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.166 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4118036 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4118036 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 
-- # '[' -z 4118036 ']' 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.166 17:28:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.166 [2024-07-12 17:28:44.895709] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:20:26.166 [2024-07-12 17:28:44.895752] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.166 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.423 [2024-07-12 17:28:44.956501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.423 [2024-07-12 17:28:45.042977] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.424 [2024-07-12 17:28:45.043013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.424 [2024-07-12 17:28:45.043020] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.424 [2024-07-12 17:28:45.043029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.424 [2024-07-12 17:28:45.043034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:26.424 [2024-07-12 17:28:45.043082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.424 [2024-07-12 17:28:45.043178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.424 [2024-07-12 17:28:45.043196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.424 [2024-07-12 17:28:45.043200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.988 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.988 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:26.988 17:28:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.988 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:26.988 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.988 17:28:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.988 17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:26.988 17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:26.988 17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:26.988 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.988 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.988 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.246 [2024-07-12 17:28:45.892272] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.246 Malloc1 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.246 
17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.246 [2024-07-12 17:28:45.939811] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=4118190 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:27.246 17:28:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:27.246 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.771 17:28:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:29.771 17:28:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.771 17:28:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:29.771 17:28:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.771 17:28:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:29.771 
"tick_rate": 2300000000, 00:20:29.771 "poll_groups": [ 00:20:29.771 { 00:20:29.771 "name": "nvmf_tgt_poll_group_000", 00:20:29.771 "admin_qpairs": 1, 00:20:29.771 "io_qpairs": 1, 00:20:29.771 "current_admin_qpairs": 1, 00:20:29.771 "current_io_qpairs": 1, 00:20:29.771 "pending_bdev_io": 0, 00:20:29.771 "completed_nvme_io": 20822, 00:20:29.771 "transports": [ 00:20:29.771 { 00:20:29.771 "trtype": "TCP" 00:20:29.771 } 00:20:29.771 ] 00:20:29.771 }, 00:20:29.771 { 00:20:29.771 "name": "nvmf_tgt_poll_group_001", 00:20:29.771 "admin_qpairs": 0, 00:20:29.771 "io_qpairs": 1, 00:20:29.771 "current_admin_qpairs": 0, 00:20:29.771 "current_io_qpairs": 1, 00:20:29.771 "pending_bdev_io": 0, 00:20:29.771 "completed_nvme_io": 20975, 00:20:29.771 "transports": [ 00:20:29.771 { 00:20:29.771 "trtype": "TCP" 00:20:29.771 } 00:20:29.771 ] 00:20:29.771 }, 00:20:29.771 { 00:20:29.771 "name": "nvmf_tgt_poll_group_002", 00:20:29.771 "admin_qpairs": 0, 00:20:29.771 "io_qpairs": 1, 00:20:29.771 "current_admin_qpairs": 0, 00:20:29.771 "current_io_qpairs": 1, 00:20:29.771 "pending_bdev_io": 0, 00:20:29.771 "completed_nvme_io": 20882, 00:20:29.771 "transports": [ 00:20:29.771 { 00:20:29.771 "trtype": "TCP" 00:20:29.771 } 00:20:29.771 ] 00:20:29.771 }, 00:20:29.771 { 00:20:29.771 "name": "nvmf_tgt_poll_group_003", 00:20:29.771 "admin_qpairs": 0, 00:20:29.771 "io_qpairs": 1, 00:20:29.771 "current_admin_qpairs": 0, 00:20:29.771 "current_io_qpairs": 1, 00:20:29.771 "pending_bdev_io": 0, 00:20:29.771 "completed_nvme_io": 20826, 00:20:29.771 "transports": [ 00:20:29.771 { 00:20:29.771 "trtype": "TCP" 00:20:29.771 } 00:20:29.771 ] 00:20:29.771 } 00:20:29.771 ] 00:20:29.771 }' 00:20:29.771 17:28:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:29.771 17:28:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:29.771 17:28:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:29.771 17:28:48 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:29.771 17:28:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 4118190 00:20:37.942 Initializing NVMe Controllers 00:20:37.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:37.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:37.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:37.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:37.942 Initialization complete. Launching workers. 00:20:37.942 ======================================================== 00:20:37.942 Latency(us) 00:20:37.942 Device Information : IOPS MiB/s Average min max 00:20:37.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10960.10 42.81 5839.69 2580.88 9419.90 00:20:37.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11047.30 43.15 5795.40 2128.72 9040.24 00:20:37.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11015.40 43.03 5811.21 2489.08 13067.49 00:20:37.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10970.90 42.86 5835.52 2306.57 9338.64 00:20:37.942 ======================================================== 00:20:37.942 Total : 43993.69 171.85 5820.40 2128.72 13067.49 00:20:37.942 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:37.942 17:28:56 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:37.942 rmmod nvme_tcp 00:20:37.942 rmmod nvme_fabrics 00:20:37.942 rmmod nvme_keyring 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4118036 ']' 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4118036 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 4118036 ']' 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 4118036 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4118036 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4118036' 00:20:37.942 killing process with pid 4118036 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 4118036 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 4118036 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:37.942 17:28:56 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.942 17:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.845 17:28:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:39.845 17:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:39.845 17:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:40.781 17:28:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:43.314 17:29:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.601 17:29:06 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.601 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:48.601 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:48.602 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:48.602 Found net devices under 0000:86:00.0: cvl_0_0 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:48.602 Found net devices under 0000:86:00.1: cvl_0_1 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:48.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:48.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:20:48.602 00:20:48.602 --- 10.0.0.2 ping statistics --- 00:20:48.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.602 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:48.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:20:48.602 00:20:48.602 --- 10.0.0.1 ping statistics --- 00:20:48.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.602 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:48.602 net.core.busy_poll = 1 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:48.602 net.core.busy_read = 1 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:48.602 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:48.603 17:29:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:48.603 17:29:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:48.603 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4121975 00:20:48.603 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4121975 00:20:48.603 17:29:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:48.603 17:29:06 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 4121975 ']' 00:20:48.603 17:29:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.603 17:29:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.603 17:29:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.603 17:29:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.603 17:29:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:48.603 [2024-07-12 17:29:07.023236] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:20:48.603 [2024-07-12 17:29:07.023285] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.603 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.603 [2024-07-12 17:29:07.080421] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:48.603 [2024-07-12 17:29:07.155926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.603 [2024-07-12 17:29:07.155965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.603 [2024-07-12 17:29:07.155972] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.603 [2024-07-12 17:29:07.155978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:48.603 [2024-07-12 17:29:07.155982] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.603 [2024-07-12 17:29:07.156046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.603 [2024-07-12 17:29:07.156140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.603 [2024-07-12 17:29:07.156209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.603 [2024-07-12 17:29:07.156211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 
--enable-zerocopy-send-server -i posix 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.169 17:29:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.427 [2024-07-12 17:29:08.025244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.427 Malloc1 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.427 [2024-07-12 17:29:08.072831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=4122229 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:49.427 17:29:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:49.427 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.324 17:29:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:51.324 17:29:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.324 17:29:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.324 17:29:10 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.582 17:29:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:51.582 "tick_rate": 2300000000, 00:20:51.582 "poll_groups": [ 00:20:51.582 { 00:20:51.582 "name": "nvmf_tgt_poll_group_000", 00:20:51.582 "admin_qpairs": 1, 00:20:51.582 "io_qpairs": 3, 00:20:51.582 "current_admin_qpairs": 1, 00:20:51.582 "current_io_qpairs": 3, 00:20:51.582 "pending_bdev_io": 0, 00:20:51.582 "completed_nvme_io": 30131, 00:20:51.582 "transports": [ 00:20:51.582 { 00:20:51.582 "trtype": "TCP" 00:20:51.582 } 00:20:51.582 ] 00:20:51.582 }, 00:20:51.582 { 00:20:51.582 "name": "nvmf_tgt_poll_group_001", 00:20:51.582 "admin_qpairs": 0, 00:20:51.582 "io_qpairs": 1, 00:20:51.582 "current_admin_qpairs": 0, 00:20:51.582 "current_io_qpairs": 1, 00:20:51.582 "pending_bdev_io": 0, 00:20:51.582 "completed_nvme_io": 29046, 00:20:51.582 "transports": [ 00:20:51.582 { 00:20:51.582 "trtype": "TCP" 00:20:51.582 } 00:20:51.582 ] 00:20:51.582 }, 00:20:51.582 { 00:20:51.582 "name": "nvmf_tgt_poll_group_002", 00:20:51.582 "admin_qpairs": 0, 00:20:51.582 "io_qpairs": 0, 00:20:51.582 "current_admin_qpairs": 0, 00:20:51.582 "current_io_qpairs": 0, 00:20:51.582 "pending_bdev_io": 0, 00:20:51.582 "completed_nvme_io": 0, 00:20:51.582 "transports": [ 00:20:51.582 { 00:20:51.582 "trtype": "TCP" 00:20:51.582 } 00:20:51.582 ] 00:20:51.582 }, 00:20:51.582 { 00:20:51.582 "name": "nvmf_tgt_poll_group_003", 00:20:51.582 "admin_qpairs": 0, 00:20:51.582 "io_qpairs": 0, 00:20:51.582 "current_admin_qpairs": 0, 00:20:51.583 "current_io_qpairs": 0, 00:20:51.583 "pending_bdev_io": 0, 00:20:51.583 "completed_nvme_io": 0, 00:20:51.583 "transports": [ 00:20:51.583 { 00:20:51.583 "trtype": "TCP" 00:20:51.583 } 00:20:51.583 ] 00:20:51.583 } 00:20:51.583 ] 00:20:51.583 }' 00:20:51.583 17:29:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:51.583 17:29:10 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@100 -- # wc -l 00:20:51.583 17:29:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:51.583 17:29:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:51.583 17:29:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 4122229 00:20:59.683 Initializing NVMe Controllers 00:20:59.683 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:59.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:59.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:59.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:59.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:59.683 Initialization complete. Launching workers. 00:20:59.683 ======================================================== 00:20:59.683 Latency(us) 00:20:59.684 Device Information : IOPS MiB/s Average min max 00:20:59.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5669.40 22.15 11292.42 1450.84 57681.96 00:20:59.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15146.20 59.16 4225.13 1231.89 7072.21 00:20:59.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4908.50 19.17 13042.81 1576.23 59186.22 00:20:59.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5197.30 20.30 12358.74 1530.99 57895.62 00:20:59.684 ======================================================== 00:20:59.684 Total : 30921.40 120.79 8287.74 1231.89 59186.22 00:20:59.684 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:59.684 rmmod nvme_tcp 00:20:59.684 rmmod nvme_fabrics 00:20:59.684 rmmod nvme_keyring 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4121975 ']' 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4121975 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 4121975 ']' 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 4121975 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4121975 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4121975' 00:20:59.684 killing process with pid 4121975 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 4121975 00:20:59.684 17:29:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 4121975 00:20:59.945 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 
-- # '[' '' == iso ']' 00:20:59.945 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:59.945 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:59.945 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:59.945 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:59.945 17:29:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.945 17:29:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.945 17:29:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.299 17:29:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:03.299 17:29:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:03.299 00:21:03.299 real 0m50.377s 00:21:03.299 user 2m49.385s 00:21:03.299 sys 0m9.263s 00:21:03.299 17:29:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:03.299 17:29:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.299 ************************************ 00:21:03.299 END TEST nvmf_perf_adq 00:21:03.299 ************************************ 00:21:03.299 17:29:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:03.299 17:29:21 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:03.299 17:29:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:03.299 17:29:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:03.299 17:29:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:03.299 ************************************ 00:21:03.299 START TEST nvmf_shutdown 00:21:03.299 ************************************ 
00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:03.299 * Looking for test storage... 00:21:03.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:03.299 17:29:21 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:03.299 ************************************ 00:21:03.299 START TEST nvmf_shutdown_tc1 00:21:03.299 ************************************ 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.299 17:29:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:03.299 17:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:08.572 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:08.572 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:08.572 Found net devices under 0000:86:00.0: cvl_0_0 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.572 17:29:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:08.572 Found net devices under 0000:86:00.1: cvl_0_1 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:08.572 
17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:08.572 17:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:08.572 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.572 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.572 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.572 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.572 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:08.572 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.572 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.572 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.572 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:08.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:08.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:21:08.572 00:21:08.572 --- 10.0.0.2 ping statistics --- 00:21:08.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.572 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:21:08.572 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:21:08.572 00:21:08.572 --- 10.0.0.1 ping statistics --- 00:21:08.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.572 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:21:08.572 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.572 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:08.572 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=4127465 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 4127465 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 4127465 ']' 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.573 17:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:08.573 [2024-07-12 17:29:27.294461] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:21:08.573 [2024-07-12 17:29:27.294503] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.573 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.830 [2024-07-12 17:29:27.352542] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.830 [2024-07-12 17:29:27.433873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.830 [2024-07-12 17:29:27.433908] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.830 [2024-07-12 17:29:27.433915] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.830 [2024-07-12 17:29:27.433921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.830 [2024-07-12 17:29:27.433926] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:08.830 [2024-07-12 17:29:27.434021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.830 [2024-07-12 17:29:27.434037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.830 [2024-07-12 17:29:27.434151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.830 [2024-07-12 17:29:27.434153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:09.396 [2024-07-12 17:29:28.143213] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:09.396 
17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:09.396 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:09.654 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:09.654 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:09.654 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:09.654 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:09.654 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:09.654 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:09.654 17:29:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:09.655 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:09.655 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:09.655 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:09.655 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:09.655 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:09.655 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:09.655 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.655 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:09.655 Malloc1 00:21:09.655 [2024-07-12 17:29:28.239094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.655 Malloc2 00:21:09.655 Malloc3 00:21:09.655 Malloc4 00:21:09.655 Malloc5 00:21:09.655 Malloc6 00:21:09.914 Malloc7 00:21:09.914 Malloc8 00:21:09.914 Malloc9 00:21:09.914 Malloc10 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=4127746 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 4127746 
/var/tmp/bdevperf.sock 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 4127746 ']' 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.914 { 00:21:09.914 "params": { 00:21:09.914 "name": "Nvme$subsystem", 00:21:09.914 "trtype": "$TEST_TRANSPORT", 00:21:09.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.914 "adrfam": "ipv4", 00:21:09.914 "trsvcid": "$NVMF_PORT", 00:21:09.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.914 "hdgst": ${hdgst:-false}, 00:21:09.914 "ddgst": ${ddgst:-false} 00:21:09.914 }, 00:21:09.914 "method": "bdev_nvme_attach_controller" 00:21:09.914 } 00:21:09.914 EOF 00:21:09.914 )") 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.914 { 00:21:09.914 "params": { 00:21:09.914 "name": "Nvme$subsystem", 00:21:09.914 "trtype": "$TEST_TRANSPORT", 00:21:09.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.914 "adrfam": "ipv4", 00:21:09.914 "trsvcid": "$NVMF_PORT", 00:21:09.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.914 "hdgst": ${hdgst:-false}, 00:21:09.914 "ddgst": ${ddgst:-false} 00:21:09.914 
}, 00:21:09.914 "method": "bdev_nvme_attach_controller" 00:21:09.914 } 00:21:09.914 EOF 00:21:09.914 )") 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:09.914 { 00:21:09.914 "params": { 00:21:09.914 "name": "Nvme$subsystem", 00:21:09.914 "trtype": "$TEST_TRANSPORT", 00:21:09.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.914 "adrfam": "ipv4", 00:21:09.914 "trsvcid": "$NVMF_PORT", 00:21:09.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.914 "hdgst": ${hdgst:-false}, 00:21:09.914 "ddgst": ${ddgst:-false} 00:21:09.914 }, 00:21:09.914 "method": "bdev_nvme_attach_controller" 00:21:09.914 } 00:21:09.914 EOF 00:21:09.914 )") 00:21:09.914 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:10.174 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.174 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.174 { 00:21:10.174 "params": { 00:21:10.174 "name": "Nvme$subsystem", 00:21:10.174 "trtype": "$TEST_TRANSPORT", 00:21:10.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.174 "adrfam": "ipv4", 00:21:10.174 "trsvcid": "$NVMF_PORT", 00:21:10.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.174 "hdgst": ${hdgst:-false}, 00:21:10.174 "ddgst": ${ddgst:-false} 00:21:10.174 }, 00:21:10.174 "method": "bdev_nvme_attach_controller" 00:21:10.174 } 00:21:10.174 EOF 00:21:10.174 )") 00:21:10.174 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:10.174 17:29:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.174 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.174 { 00:21:10.174 "params": { 00:21:10.174 "name": "Nvme$subsystem", 00:21:10.174 "trtype": "$TEST_TRANSPORT", 00:21:10.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.174 "adrfam": "ipv4", 00:21:10.174 "trsvcid": "$NVMF_PORT", 00:21:10.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.174 "hdgst": ${hdgst:-false}, 00:21:10.174 "ddgst": ${ddgst:-false} 00:21:10.174 }, 00:21:10.174 "method": "bdev_nvme_attach_controller" 00:21:10.174 } 00:21:10.174 EOF 00:21:10.174 )") 00:21:10.174 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:10.174 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.174 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.174 { 00:21:10.174 "params": { 00:21:10.174 "name": "Nvme$subsystem", 00:21:10.174 "trtype": "$TEST_TRANSPORT", 00:21:10.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.174 "adrfam": "ipv4", 00:21:10.174 "trsvcid": "$NVMF_PORT", 00:21:10.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.174 "hdgst": ${hdgst:-false}, 00:21:10.174 "ddgst": ${ddgst:-false} 00:21:10.174 }, 00:21:10.174 "method": "bdev_nvme_attach_controller" 00:21:10.174 } 00:21:10.174 EOF 00:21:10.174 )") 00:21:10.174 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:10.174 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.174 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.174 { 00:21:10.174 
"params": { 00:21:10.174 "name": "Nvme$subsystem", 00:21:10.174 "trtype": "$TEST_TRANSPORT", 00:21:10.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "$NVMF_PORT", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.175 "hdgst": ${hdgst:-false}, 00:21:10.175 "ddgst": ${ddgst:-false} 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 } 00:21:10.175 EOF 00:21:10.175 )") 00:21:10.175 [2024-07-12 17:29:28.715135] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:21:10.175 [2024-07-12 17:29:28.715183] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:10.175 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:10.175 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.175 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.175 { 00:21:10.175 "params": { 00:21:10.175 "name": "Nvme$subsystem", 00:21:10.175 "trtype": "$TEST_TRANSPORT", 00:21:10.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "$NVMF_PORT", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.175 "hdgst": ${hdgst:-false}, 00:21:10.175 "ddgst": ${ddgst:-false} 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 } 00:21:10.175 EOF 00:21:10.175 )") 00:21:10.175 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:10.175 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:21:10.175 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.175 { 00:21:10.175 "params": { 00:21:10.175 "name": "Nvme$subsystem", 00:21:10.175 "trtype": "$TEST_TRANSPORT", 00:21:10.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "$NVMF_PORT", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.175 "hdgst": ${hdgst:-false}, 00:21:10.175 "ddgst": ${ddgst:-false} 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 } 00:21:10.175 EOF 00:21:10.175 )") 00:21:10.175 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:10.175 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:10.175 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:10.175 { 00:21:10.175 "params": { 00:21:10.175 "name": "Nvme$subsystem", 00:21:10.175 "trtype": "$TEST_TRANSPORT", 00:21:10.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "$NVMF_PORT", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:10.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:10.175 "hdgst": ${hdgst:-false}, 00:21:10.175 "ddgst": ${ddgst:-false} 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 } 00:21:10.175 EOF 00:21:10.175 )") 00:21:10.175 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:10.175 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:10.175 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.175 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:10.175 17:29:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:10.175 "params": { 00:21:10.175 "name": "Nvme1", 00:21:10.175 "trtype": "tcp", 00:21:10.175 "traddr": "10.0.0.2", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "4420", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.175 "hdgst": false, 00:21:10.175 "ddgst": false 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 },{ 00:21:10.175 "params": { 00:21:10.175 "name": "Nvme2", 00:21:10.175 "trtype": "tcp", 00:21:10.175 "traddr": "10.0.0.2", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "4420", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:10.175 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:10.175 "hdgst": false, 00:21:10.175 "ddgst": false 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 },{ 00:21:10.175 "params": { 00:21:10.175 "name": "Nvme3", 00:21:10.175 "trtype": "tcp", 00:21:10.175 "traddr": "10.0.0.2", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "4420", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:10.175 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:10.175 "hdgst": false, 00:21:10.175 "ddgst": false 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 },{ 00:21:10.175 "params": { 00:21:10.175 "name": "Nvme4", 00:21:10.175 "trtype": "tcp", 00:21:10.175 "traddr": "10.0.0.2", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "4420", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:10.175 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:10.175 "hdgst": false, 00:21:10.175 "ddgst": false 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 },{ 
00:21:10.175 "params": { 00:21:10.175 "name": "Nvme5", 00:21:10.175 "trtype": "tcp", 00:21:10.175 "traddr": "10.0.0.2", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "4420", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:10.175 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:10.175 "hdgst": false, 00:21:10.175 "ddgst": false 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 },{ 00:21:10.175 "params": { 00:21:10.175 "name": "Nvme6", 00:21:10.175 "trtype": "tcp", 00:21:10.175 "traddr": "10.0.0.2", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "4420", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:10.175 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:10.175 "hdgst": false, 00:21:10.175 "ddgst": false 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 },{ 00:21:10.175 "params": { 00:21:10.175 "name": "Nvme7", 00:21:10.175 "trtype": "tcp", 00:21:10.175 "traddr": "10.0.0.2", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "4420", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:10.175 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:10.175 "hdgst": false, 00:21:10.175 "ddgst": false 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 },{ 00:21:10.175 "params": { 00:21:10.175 "name": "Nvme8", 00:21:10.175 "trtype": "tcp", 00:21:10.175 "traddr": "10.0.0.2", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "4420", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:10.175 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:10.175 "hdgst": false, 00:21:10.175 "ddgst": false 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 },{ 00:21:10.175 "params": { 00:21:10.175 "name": "Nvme9", 00:21:10.175 "trtype": "tcp", 00:21:10.175 "traddr": "10.0.0.2", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "4420", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:10.175 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:21:10.175 "hdgst": false, 00:21:10.175 "ddgst": false 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 },{ 00:21:10.175 "params": { 00:21:10.175 "name": "Nvme10", 00:21:10.175 "trtype": "tcp", 00:21:10.175 "traddr": "10.0.0.2", 00:21:10.175 "adrfam": "ipv4", 00:21:10.175 "trsvcid": "4420", 00:21:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:10.175 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:10.175 "hdgst": false, 00:21:10.175 "ddgst": false 00:21:10.175 }, 00:21:10.175 "method": "bdev_nvme_attach_controller" 00:21:10.175 }' 00:21:10.175 [2024-07-12 17:29:28.771847] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.175 [2024-07-12 17:29:28.845588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.551 17:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.551 17:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:11.551 17:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:11.551 17:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.551 17:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:11.551 17:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.551 17:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 4127746 00:21:11.551 17:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:11.551 17:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:12.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4127746 Killed 
$rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:12.486 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 4127465 00:21:12.486 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:12.486 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:12.486 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:12.486 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:12.486 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.486 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.486 { 00:21:12.486 "params": { 00:21:12.486 "name": "Nvme$subsystem", 00:21:12.486 "trtype": "$TEST_TRANSPORT", 00:21:12.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.486 "adrfam": "ipv4", 00:21:12.486 "trsvcid": "$NVMF_PORT", 00:21:12.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.486 "hdgst": ${hdgst:-false}, 00:21:12.486 "ddgst": ${ddgst:-false} 00:21:12.486 }, 00:21:12.486 "method": "bdev_nvme_attach_controller" 00:21:12.486 } 00:21:12.486 EOF 00:21:12.486 )") 00:21:12.486 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.486 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.486 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.486 { 00:21:12.486 "params": { 00:21:12.486 "name": "Nvme$subsystem", 
00:21:12.486 "trtype": "$TEST_TRANSPORT", 00:21:12.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.486 "adrfam": "ipv4", 00:21:12.486 "trsvcid": "$NVMF_PORT", 00:21:12.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.486 "hdgst": ${hdgst:-false}, 00:21:12.486 "ddgst": ${ddgst:-false} 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 } 00:21:12.487 EOF 00:21:12.487 )") 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.487 { 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme$subsystem", 00:21:12.487 "trtype": "$TEST_TRANSPORT", 00:21:12.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "$NVMF_PORT", 00:21:12.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.487 "hdgst": ${hdgst:-false}, 00:21:12.487 "ddgst": ${ddgst:-false} 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 } 00:21:12.487 EOF 00:21:12.487 )") 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.487 { 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme$subsystem", 00:21:12.487 "trtype": "$TEST_TRANSPORT", 00:21:12.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "$NVMF_PORT", 00:21:12.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.487 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.487 "hdgst": ${hdgst:-false}, 00:21:12.487 "ddgst": ${ddgst:-false} 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 } 00:21:12.487 EOF 00:21:12.487 )") 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.487 { 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme$subsystem", 00:21:12.487 "trtype": "$TEST_TRANSPORT", 00:21:12.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "$NVMF_PORT", 00:21:12.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.487 "hdgst": ${hdgst:-false}, 00:21:12.487 "ddgst": ${ddgst:-false} 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 } 00:21:12.487 EOF 00:21:12.487 )") 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.487 { 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme$subsystem", 00:21:12.487 "trtype": "$TEST_TRANSPORT", 00:21:12.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "$NVMF_PORT", 00:21:12.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.487 "hdgst": ${hdgst:-false}, 00:21:12.487 "ddgst": ${ddgst:-false} 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 } 00:21:12.487 EOF 
00:21:12.487 )") 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.487 { 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme$subsystem", 00:21:12.487 "trtype": "$TEST_TRANSPORT", 00:21:12.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "$NVMF_PORT", 00:21:12.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.487 "hdgst": ${hdgst:-false}, 00:21:12.487 "ddgst": ${ddgst:-false} 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 } 00:21:12.487 EOF 00:21:12.487 )") 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.487 [2024-07-12 17:29:31.213890] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:21:12.487 [2024-07-12 17:29:31.213939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4128218 ] 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.487 { 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme$subsystem", 00:21:12.487 "trtype": "$TEST_TRANSPORT", 00:21:12.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "$NVMF_PORT", 00:21:12.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.487 "hdgst": ${hdgst:-false}, 00:21:12.487 "ddgst": ${ddgst:-false} 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 } 00:21:12.487 EOF 00:21:12.487 )") 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.487 { 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme$subsystem", 00:21:12.487 "trtype": "$TEST_TRANSPORT", 00:21:12.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "$NVMF_PORT", 00:21:12.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.487 "hdgst": ${hdgst:-false}, 00:21:12.487 "ddgst": ${ddgst:-false} 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 } 00:21:12.487 EOF 00:21:12.487 )") 00:21:12.487 17:29:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:12.487 { 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme$subsystem", 00:21:12.487 "trtype": "$TEST_TRANSPORT", 00:21:12.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "$NVMF_PORT", 00:21:12.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.487 "hdgst": ${hdgst:-false}, 00:21:12.487 "ddgst": ${ddgst:-false} 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 } 00:21:12.487 EOF 00:21:12.487 )") 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
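The surrounding trace shows the core of shutdown_tc1: a secondary app is hard-killed with `kill -9` (shutdown.sh@83, the "Killed" job-control message above), and `kill -0` is then used to confirm the long-lived target process is still alive (shutdown.sh@88). Signal 0 delivers nothing; its exit status only reports whether the PID exists and is signalable. A minimal stand-in with `sleep` processes (the PIDs and process roles here are placeholders, not the test's real bdev_svc/nvmf target):

```shell
#!/usr/bin/env bash
# Liveness-after-kill pattern: SIGKILL one process, then probe another with
# kill -0, which sends no signal and only reports existence.
sleep 5 & target=$!   # stands in for the long-lived nvmf target
sleep 5 & app=$!      # stands in for the secondary bdev_svc app

kill -9 "$app"                    # hard-kill the app mid-run
wait "$app" 2>/dev/null || true   # reap it; wait's status reflects SIGKILL

if kill -0 "$target" 2>/dev/null; then
    status="target survived"
else
    status="target died"
fi
echo "$status"

kill "$target" 2>/dev/null        # clean up the stand-in target
wait "$target" 2>/dev/null || true
```

Because the sibling process was killed and reaped before the probe, `kill -0` on it would fail, while the untouched process still answers; that asymmetry is exactly what `kill -0 4127465` checks before the test proceeds.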
00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:12.487 17:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme1", 00:21:12.487 "trtype": "tcp", 00:21:12.487 "traddr": "10.0.0.2", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "4420", 00:21:12.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:12.487 "hdgst": false, 00:21:12.487 "ddgst": false 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 },{ 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme2", 00:21:12.487 "trtype": "tcp", 00:21:12.487 "traddr": "10.0.0.2", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "4420", 00:21:12.487 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:12.487 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:12.487 "hdgst": false, 00:21:12.487 "ddgst": false 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 },{ 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme3", 00:21:12.487 "trtype": "tcp", 00:21:12.487 "traddr": "10.0.0.2", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "4420", 00:21:12.487 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:12.487 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:12.487 "hdgst": false, 00:21:12.487 "ddgst": false 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 },{ 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme4", 00:21:12.487 "trtype": "tcp", 00:21:12.487 "traddr": "10.0.0.2", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "4420", 00:21:12.487 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:12.487 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:12.487 "hdgst": false, 00:21:12.487 "ddgst": false 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 },{ 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme5", 00:21:12.487 
"trtype": "tcp", 00:21:12.487 "traddr": "10.0.0.2", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "4420", 00:21:12.487 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:12.487 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:12.487 "hdgst": false, 00:21:12.487 "ddgst": false 00:21:12.487 }, 00:21:12.487 "method": "bdev_nvme_attach_controller" 00:21:12.487 },{ 00:21:12.487 "params": { 00:21:12.487 "name": "Nvme6", 00:21:12.487 "trtype": "tcp", 00:21:12.487 "traddr": "10.0.0.2", 00:21:12.487 "adrfam": "ipv4", 00:21:12.487 "trsvcid": "4420", 00:21:12.488 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:12.488 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:12.488 "hdgst": false, 00:21:12.488 "ddgst": false 00:21:12.488 }, 00:21:12.488 "method": "bdev_nvme_attach_controller" 00:21:12.488 },{ 00:21:12.488 "params": { 00:21:12.488 "name": "Nvme7", 00:21:12.488 "trtype": "tcp", 00:21:12.488 "traddr": "10.0.0.2", 00:21:12.488 "adrfam": "ipv4", 00:21:12.488 "trsvcid": "4420", 00:21:12.488 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:12.488 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:12.488 "hdgst": false, 00:21:12.488 "ddgst": false 00:21:12.488 }, 00:21:12.488 "method": "bdev_nvme_attach_controller" 00:21:12.488 },{ 00:21:12.488 "params": { 00:21:12.488 "name": "Nvme8", 00:21:12.488 "trtype": "tcp", 00:21:12.488 "traddr": "10.0.0.2", 00:21:12.488 "adrfam": "ipv4", 00:21:12.488 "trsvcid": "4420", 00:21:12.488 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:12.488 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:12.488 "hdgst": false, 00:21:12.488 "ddgst": false 00:21:12.488 }, 00:21:12.488 "method": "bdev_nvme_attach_controller" 00:21:12.488 },{ 00:21:12.488 "params": { 00:21:12.488 "name": "Nvme9", 00:21:12.488 "trtype": "tcp", 00:21:12.488 "traddr": "10.0.0.2", 00:21:12.488 "adrfam": "ipv4", 00:21:12.488 "trsvcid": "4420", 00:21:12.488 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:12.488 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:12.488 "hdgst": false, 00:21:12.488 "ddgst": 
false 00:21:12.488 }, 00:21:12.488 "method": "bdev_nvme_attach_controller" 00:21:12.488 },{ 00:21:12.488 "params": { 00:21:12.488 "name": "Nvme10", 00:21:12.488 "trtype": "tcp", 00:21:12.488 "traddr": "10.0.0.2", 00:21:12.488 "adrfam": "ipv4", 00:21:12.488 "trsvcid": "4420", 00:21:12.488 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:12.488 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:12.488 "hdgst": false, 00:21:12.488 "ddgst": false 00:21:12.488 }, 00:21:12.488 "method": "bdev_nvme_attach_controller" 00:21:12.488 }' 00:21:12.488 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.746 [2024-07-12 17:29:31.269668] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.746 [2024-07-12 17:29:31.344086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.122 Running I/O for 1 seconds... 00:21:15.059 00:21:15.059 Latency(us) 00:21:15.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.059 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:15.059 Verification LBA range: start 0x0 length 0x400 00:21:15.059 Nvme1n1 : 1.15 278.27 17.39 0.00 0.00 226686.00 15956.59 217009.64 00:21:15.059 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:15.059 Verification LBA range: start 0x0 length 0x400 00:21:15.059 Nvme2n1 : 1.15 277.25 17.33 0.00 0.00 225711.50 18578.03 213362.42 00:21:15.059 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:15.059 Verification LBA range: start 0x0 length 0x400 00:21:15.059 Nvme3n1 : 1.16 276.43 17.28 0.00 0.00 223242.69 15728.64 218833.25 00:21:15.059 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:15.059 Verification LBA range: start 0x0 length 0x400 00:21:15.059 Nvme4n1 : 1.11 288.14 18.01 0.00 0.00 210637.87 13449.13 216097.84 00:21:15.059 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:15.059 Verification LBA range: 
start 0x0 length 0x400 00:21:15.059 Nvme5n1 : 1.16 274.71 17.17 0.00 0.00 218329.80 17324.30 214274.23 00:21:15.059 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:15.059 Verification LBA range: start 0x0 length 0x400 00:21:15.059 Nvme6n1 : 1.16 275.43 17.21 0.00 0.00 213839.52 18122.13 211538.81 00:21:15.059 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:15.059 Verification LBA range: start 0x0 length 0x400 00:21:15.059 Nvme7n1 : 1.14 281.68 17.61 0.00 0.00 206292.81 14417.92 217009.64 00:21:15.059 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:15.059 Verification LBA range: start 0x0 length 0x400 00:21:15.059 Nvme8n1 : 1.14 279.69 17.48 0.00 0.00 204719.50 15956.59 215186.03 00:21:15.059 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:15.059 Verification LBA range: start 0x0 length 0x400 00:21:15.059 Nvme9n1 : 1.17 274.01 17.13 0.00 0.00 206259.60 18692.01 238892.97 00:21:15.059 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:15.059 Verification LBA range: start 0x0 length 0x400 00:21:15.059 Nvme10n1 : 1.17 273.36 17.08 0.00 0.00 203703.34 16298.52 222480.47 00:21:15.059 =================================================================================================================== 00:21:15.059 Total : 2778.97 173.69 0.00 0.00 213942.26 13449.13 238892.97 00:21:15.316 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:15.316 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:15.316 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:15.316 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:15.316 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:15.316 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:15.316 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:15.316 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:15.316 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:15.316 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:15.316 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:15.316 rmmod nvme_tcp 00:21:15.316 rmmod nvme_fabrics 00:21:15.316 rmmod nvme_keyring 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 4127465 ']' 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 4127465 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 4127465 ']' 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 4127465 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # 
ps --no-headers -o comm= 4127465 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4127465' 00:21:15.575 killing process with pid 4127465 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 4127465 00:21:15.575 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 4127465 00:21:15.833 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:15.833 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:15.833 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:15.833 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:15.833 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:15.833 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.833 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.833 17:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:18.370 00:21:18.370 real 0m14.738s 00:21:18.370 user 0m33.616s 00:21:18.370 sys 0m5.312s 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:18.370 17:29:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:18.370 ************************************ 00:21:18.370 END TEST nvmf_shutdown_tc1 00:21:18.370 ************************************ 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:18.370 ************************************ 00:21:18.370 START TEST nvmf_shutdown_tc2 00:21:18.370 ************************************ 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:18.370 
17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:18.370 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:18.370 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.370 17:29:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.370 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:18.371 Found net devices under 0000:86:00.0: cvl_0_0 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:18.371 Found net devices under 0000:86:00.1: cvl_0_1 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.371 17:29:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:18.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:18.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:21:18.371 00:21:18.371 --- 10.0.0.2 ping statistics --- 00:21:18.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.371 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:18.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:21:18.371 00:21:18.371 --- 10.0.0.1 ping statistics --- 00:21:18.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.371 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
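The trace above shows `nvmf_tcp_init` splitting a two-port NIC between the host and a network namespace so target and initiator can exercise real hardware over 10.0.0.0/24. A dry-run sketch of those steps (interface names, IPs, and the namespace name are taken from the log; the function only prints the commands, since executing them needs root and the `cvl_0_*` ports):

```shell
#!/bin/sh
# Print the namespace-setup commands traced in the log (nvmf/common.sh
# nvmf_tcp_init). Kept as a dry run: the real steps require root and the
# physical cvl_0_0 / cvl_0_1 net devices.
nvmf_tcp_init_sketch() {
    ns=cvl_0_0_ns_spdk
    cat <<EOF
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $ns
ip link set cvl_0_0 netns $ns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $ns ip link set cvl_0_0 up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}
# nvmf_tcp_init_sketch   # prints the command list
```

The two pings at the end mirror the sanity checks in the log: host-to-namespace and namespace-to-host reachability before the target starts listening on port 4420.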
00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4129236 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4129236 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4129236 ']' 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.371 17:29:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:18.371 [2024-07-12 17:29:36.983335] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:21:18.371 [2024-07-12 17:29:36.983373] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.371 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.371 [2024-07-12 17:29:37.039834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.371 [2024-07-12 17:29:37.120844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.371 [2024-07-12 17:29:37.120878] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.371 [2024-07-12 17:29:37.120885] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.371 [2024-07-12 17:29:37.120891] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.371 [2024-07-12 17:29:37.120896] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:18.371 [2024-07-12 17:29:37.121118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.371 [2024-07-12 17:29:37.121202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:18.371 [2024-07-12 17:29:37.121310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.371 [2024-07-12 17:29:37.121311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:19.309 [2024-07-12 17:29:37.835461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:19.309 
17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:19.309 17:29:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.309 17:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:19.310 Malloc1 00:21:19.310 [2024-07-12 17:29:37.931313] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.310 Malloc2 00:21:19.310 Malloc3 00:21:19.310 Malloc4 00:21:19.310 Malloc5 00:21:19.569 Malloc6 00:21:19.569 Malloc7 00:21:19.569 Malloc8 00:21:19.569 Malloc9 00:21:19.569 Malloc10 00:21:19.569 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.569 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:19.569 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:19.569 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=4129520 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 
4129520 /var/tmp/bdevperf.sock 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4129520 ']' 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:19.829 { 00:21:19.829 "params": { 00:21:19.829 "name": "Nvme$subsystem", 00:21:19.829 "trtype": "$TEST_TRANSPORT", 00:21:19.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.829 "adrfam": "ipv4", 00:21:19.829 "trsvcid": "$NVMF_PORT", 00:21:19.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.829 "hdgst": ${hdgst:-false}, 00:21:19.829 "ddgst": ${ddgst:-false} 00:21:19.829 }, 00:21:19.829 "method": "bdev_nvme_attach_controller" 00:21:19.829 } 00:21:19.829 EOF 00:21:19.829 )") 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:19.829 { 00:21:19.829 "params": { 00:21:19.829 "name": "Nvme$subsystem", 00:21:19.829 "trtype": "$TEST_TRANSPORT", 00:21:19.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.829 "adrfam": "ipv4", 00:21:19.829 "trsvcid": "$NVMF_PORT", 00:21:19.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.829 "hdgst": ${hdgst:-false}, 00:21:19.829 "ddgst": ${ddgst:-false} 00:21:19.829 
}, 00:21:19.829 "method": "bdev_nvme_attach_controller" 00:21:19.829 } 00:21:19.829 EOF 00:21:19.829 )") 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:19.829 { 00:21:19.829 "params": { 00:21:19.829 "name": "Nvme$subsystem", 00:21:19.829 "trtype": "$TEST_TRANSPORT", 00:21:19.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.829 "adrfam": "ipv4", 00:21:19.829 "trsvcid": "$NVMF_PORT", 00:21:19.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.829 "hdgst": ${hdgst:-false}, 00:21:19.829 "ddgst": ${ddgst:-false} 00:21:19.829 }, 00:21:19.829 "method": "bdev_nvme_attach_controller" 00:21:19.829 } 00:21:19.829 EOF 00:21:19.829 )") 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:19.829 { 00:21:19.829 "params": { 00:21:19.829 "name": "Nvme$subsystem", 00:21:19.829 "trtype": "$TEST_TRANSPORT", 00:21:19.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.829 "adrfam": "ipv4", 00:21:19.829 "trsvcid": "$NVMF_PORT", 00:21:19.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.829 "hdgst": ${hdgst:-false}, 00:21:19.829 "ddgst": ${ddgst:-false} 00:21:19.829 }, 00:21:19.829 "method": "bdev_nvme_attach_controller" 00:21:19.829 } 00:21:19.829 EOF 00:21:19.829 )") 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:19.829 17:29:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:19.829 { 00:21:19.829 "params": { 00:21:19.829 "name": "Nvme$subsystem", 00:21:19.829 "trtype": "$TEST_TRANSPORT", 00:21:19.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.829 "adrfam": "ipv4", 00:21:19.829 "trsvcid": "$NVMF_PORT", 00:21:19.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.829 "hdgst": ${hdgst:-false}, 00:21:19.829 "ddgst": ${ddgst:-false} 00:21:19.829 }, 00:21:19.829 "method": "bdev_nvme_attach_controller" 00:21:19.829 } 00:21:19.829 EOF 00:21:19.829 )") 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:19.829 { 00:21:19.829 "params": { 00:21:19.829 "name": "Nvme$subsystem", 00:21:19.829 "trtype": "$TEST_TRANSPORT", 00:21:19.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.829 "adrfam": "ipv4", 00:21:19.829 "trsvcid": "$NVMF_PORT", 00:21:19.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.829 "hdgst": ${hdgst:-false}, 00:21:19.829 "ddgst": ${ddgst:-false} 00:21:19.829 }, 00:21:19.829 "method": "bdev_nvme_attach_controller" 00:21:19.829 } 00:21:19.829 EOF 00:21:19.829 )") 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:19.829 { 00:21:19.829 
"params": { 00:21:19.829 "name": "Nvme$subsystem", 00:21:19.829 "trtype": "$TEST_TRANSPORT", 00:21:19.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.829 "adrfam": "ipv4", 00:21:19.829 "trsvcid": "$NVMF_PORT", 00:21:19.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.829 "hdgst": ${hdgst:-false}, 00:21:19.829 "ddgst": ${ddgst:-false} 00:21:19.829 }, 00:21:19.829 "method": "bdev_nvme_attach_controller" 00:21:19.829 } 00:21:19.829 EOF 00:21:19.829 )") 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:19.829 [2024-07-12 17:29:38.404711] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:21:19.829 [2024-07-12 17:29:38.404756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4129520 ] 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:19.829 { 00:21:19.829 "params": { 00:21:19.829 "name": "Nvme$subsystem", 00:21:19.829 "trtype": "$TEST_TRANSPORT", 00:21:19.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.829 "adrfam": "ipv4", 00:21:19.829 "trsvcid": "$NVMF_PORT", 00:21:19.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.829 "hdgst": ${hdgst:-false}, 00:21:19.829 "ddgst": ${ddgst:-false} 00:21:19.829 }, 00:21:19.829 "method": "bdev_nvme_attach_controller" 00:21:19.829 } 00:21:19.829 EOF 00:21:19.829 )") 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:19.829 { 00:21:19.829 "params": { 00:21:19.829 "name": "Nvme$subsystem", 00:21:19.829 "trtype": "$TEST_TRANSPORT", 00:21:19.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.829 "adrfam": "ipv4", 00:21:19.829 "trsvcid": "$NVMF_PORT", 00:21:19.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.829 "hdgst": ${hdgst:-false}, 00:21:19.829 "ddgst": ${ddgst:-false} 00:21:19.829 }, 00:21:19.829 "method": "bdev_nvme_attach_controller" 00:21:19.829 } 00:21:19.829 EOF 00:21:19.829 )") 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:19.829 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:19.829 { 00:21:19.829 "params": { 00:21:19.829 "name": "Nvme$subsystem", 00:21:19.829 "trtype": "$TEST_TRANSPORT", 00:21:19.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.829 "adrfam": "ipv4", 00:21:19.830 "trsvcid": "$NVMF_PORT", 00:21:19.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.830 "hdgst": ${hdgst:-false}, 00:21:19.830 "ddgst": ${ddgst:-false} 00:21:19.830 }, 00:21:19.830 "method": "bdev_nvme_attach_controller" 00:21:19.830 } 00:21:19.830 EOF 00:21:19.830 )") 00:21:19.830 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:19.830 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.830 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:21:19.830 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:19.830 17:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:19.830 "params": { 00:21:19.830 "name": "Nvme1", 00:21:19.830 "trtype": "tcp", 00:21:19.830 "traddr": "10.0.0.2", 00:21:19.830 "adrfam": "ipv4", 00:21:19.830 "trsvcid": "4420", 00:21:19.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:19.830 "hdgst": false, 00:21:19.830 "ddgst": false 00:21:19.830 }, 00:21:19.830 "method": "bdev_nvme_attach_controller" 00:21:19.830 },{ 00:21:19.830 "params": { 00:21:19.830 "name": "Nvme2", 00:21:19.830 "trtype": "tcp", 00:21:19.830 "traddr": "10.0.0.2", 00:21:19.830 "adrfam": "ipv4", 00:21:19.830 "trsvcid": "4420", 00:21:19.830 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:19.830 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:19.830 "hdgst": false, 00:21:19.830 "ddgst": false 00:21:19.830 }, 00:21:19.830 "method": "bdev_nvme_attach_controller" 00:21:19.830 },{ 00:21:19.830 "params": { 00:21:19.830 "name": "Nvme3", 00:21:19.830 "trtype": "tcp", 00:21:19.830 "traddr": "10.0.0.2", 00:21:19.830 "adrfam": "ipv4", 00:21:19.830 "trsvcid": "4420", 00:21:19.830 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:19.830 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:19.830 "hdgst": false, 00:21:19.830 "ddgst": false 00:21:19.830 }, 00:21:19.830 "method": "bdev_nvme_attach_controller" 00:21:19.830 },{ 00:21:19.830 "params": { 00:21:19.830 "name": "Nvme4", 00:21:19.830 "trtype": "tcp", 00:21:19.830 "traddr": "10.0.0.2", 00:21:19.830 "adrfam": "ipv4", 00:21:19.830 "trsvcid": "4420", 00:21:19.830 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:19.830 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:19.830 "hdgst": false, 00:21:19.830 "ddgst": false 00:21:19.830 }, 00:21:19.830 "method": "bdev_nvme_attach_controller" 00:21:19.830 },{ 00:21:19.830 "params": { 00:21:19.830 "name": "Nvme5", 00:21:19.830 
"trtype": "tcp", 00:21:19.830 "traddr": "10.0.0.2", 00:21:19.830 "adrfam": "ipv4", 00:21:19.830 "trsvcid": "4420", 00:21:19.830 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:19.830 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:19.830 "hdgst": false, 00:21:19.830 "ddgst": false 00:21:19.830 }, 00:21:19.830 "method": "bdev_nvme_attach_controller" 00:21:19.830 },{ 00:21:19.830 "params": { 00:21:19.830 "name": "Nvme6", 00:21:19.830 "trtype": "tcp", 00:21:19.830 "traddr": "10.0.0.2", 00:21:19.830 "adrfam": "ipv4", 00:21:19.830 "trsvcid": "4420", 00:21:19.830 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:19.830 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:19.830 "hdgst": false, 00:21:19.830 "ddgst": false 00:21:19.830 }, 00:21:19.830 "method": "bdev_nvme_attach_controller" 00:21:19.830 },{ 00:21:19.830 "params": { 00:21:19.830 "name": "Nvme7", 00:21:19.830 "trtype": "tcp", 00:21:19.830 "traddr": "10.0.0.2", 00:21:19.830 "adrfam": "ipv4", 00:21:19.830 "trsvcid": "4420", 00:21:19.830 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:19.830 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:19.830 "hdgst": false, 00:21:19.830 "ddgst": false 00:21:19.830 }, 00:21:19.830 "method": "bdev_nvme_attach_controller" 00:21:19.830 },{ 00:21:19.830 "params": { 00:21:19.830 "name": "Nvme8", 00:21:19.830 "trtype": "tcp", 00:21:19.830 "traddr": "10.0.0.2", 00:21:19.830 "adrfam": "ipv4", 00:21:19.830 "trsvcid": "4420", 00:21:19.830 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:19.830 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:19.830 "hdgst": false, 00:21:19.830 "ddgst": false 00:21:19.830 }, 00:21:19.830 "method": "bdev_nvme_attach_controller" 00:21:19.830 },{ 00:21:19.830 "params": { 00:21:19.830 "name": "Nvme9", 00:21:19.830 "trtype": "tcp", 00:21:19.830 "traddr": "10.0.0.2", 00:21:19.830 "adrfam": "ipv4", 00:21:19.830 "trsvcid": "4420", 00:21:19.830 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:19.830 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:19.830 "hdgst": false, 00:21:19.830 "ddgst": 
false 00:21:19.830 }, 00:21:19.830 "method": "bdev_nvme_attach_controller" 00:21:19.830 },{ 00:21:19.830 "params": { 00:21:19.830 "name": "Nvme10", 00:21:19.830 "trtype": "tcp", 00:21:19.830 "traddr": "10.0.0.2", 00:21:19.830 "adrfam": "ipv4", 00:21:19.830 "trsvcid": "4420", 00:21:19.830 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:19.830 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:19.830 "hdgst": false, 00:21:19.830 "ddgst": false 00:21:19.830 }, 00:21:19.830 "method": "bdev_nvme_attach_controller" 00:21:19.830 }' 00:21:19.830 [2024-07-12 17:29:38.459104] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.830 [2024-07-12 17:29:38.533144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.736 Running I/O for 10 seconds... 00:21:21.736 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.736 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:21.736 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:21.736 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.736 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.736 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.736 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:21.736 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:21.737 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:21.737 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:21.737 17:29:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:21.737 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:21.737 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:21.737 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:21.737 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:21.737 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.737 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.737 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.737 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:21.737 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:21.737 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:21.996 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:21.996 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:21.996 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:21.996 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:21.996 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.996 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.996 17:29:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.996 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=86 00:21:21.996 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 86 -ge 100 ']' 00:21:21.996 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:22.255 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:22.255 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 4129520 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 4129520 ']' 
00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 4129520 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4129520 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4129520' 00:21:22.256 killing process with pid 4129520 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 4129520 00:21:22.256 17:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 4129520 00:21:22.256 Received shutdown signal, test time was about 0.898355 seconds 00:21:22.256 00:21:22.256 Latency(us) 00:21:22.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.256 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:22.256 Verification LBA range: start 0x0 length 0x400 00:21:22.256 Nvme1n1 : 0.88 291.56 18.22 0.00 0.00 217130.30 17552.25 209715.20 00:21:22.256 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:22.256 Verification LBA range: start 0x0 length 0x400 00:21:22.256 Nvme2n1 : 0.90 285.18 17.82 0.00 0.00 218087.74 16298.52 218833.25 00:21:22.256 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:22.256 Verification LBA range: start 0x0 length 0x400 00:21:22.256 Nvme3n1 : 0.87 293.21 18.33 0.00 0.00 
207967.72 14702.86 217921.45 00:21:22.256 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:22.256 Verification LBA range: start 0x0 length 0x400 00:21:22.256 Nvme4n1 : 0.88 292.24 18.26 0.00 0.00 204708.51 22681.15 207891.59 00:21:22.256 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:22.256 Verification LBA range: start 0x0 length 0x400 00:21:22.256 Nvme5n1 : 0.90 285.98 17.87 0.00 0.00 205535.50 17324.30 218833.25 00:21:22.256 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:22.256 Verification LBA range: start 0x0 length 0x400 00:21:22.256 Nvme6n1 : 0.89 294.45 18.40 0.00 0.00 194853.99 4074.63 198773.54 00:21:22.256 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:22.256 Verification LBA range: start 0x0 length 0x400 00:21:22.256 Nvme7n1 : 0.89 287.25 17.95 0.00 0.00 196584.51 14019.01 219745.06 00:21:22.256 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:22.256 Verification LBA range: start 0x0 length 0x400 00:21:22.256 Nvme8n1 : 0.88 289.48 18.09 0.00 0.00 190791.90 15500.69 220656.86 00:21:22.256 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:22.256 Verification LBA range: start 0x0 length 0x400 00:21:22.256 Nvme9n1 : 0.87 221.21 13.83 0.00 0.00 244025.43 18122.13 249834.63 00:21:22.256 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:22.256 Verification LBA range: start 0x0 length 0x400 00:21:22.256 Nvme10n1 : 0.86 222.09 13.88 0.00 0.00 237657.93 19033.93 228863.11 00:21:22.256 =================================================================================================================== 00:21:22.256 Total : 2762.64 172.67 0.00 0.00 210170.90 4074.63 249834.63 00:21:22.515 17:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:23.453 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@114 -- # kill -0 4129236 00:21:23.453 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:23.453 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:23.453 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:23.453 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:23.453 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:23.453 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:23.454 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:23.454 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:23.454 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:23.454 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:23.454 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:23.454 rmmod nvme_tcp 00:21:23.713 rmmod nvme_fabrics 00:21:23.713 rmmod nvme_keyring 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 4129236 ']' 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 4129236 
00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 4129236 ']' 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 4129236 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4129236 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4129236' 00:21:23.713 killing process with pid 4129236 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 4129236 00:21:23.713 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 4129236 00:21:23.973 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:23.973 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:23.973 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:23.973 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.973 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:23.973 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.973 17:29:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.973 17:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:26.510 00:21:26.510 real 0m8.120s 00:21:26.510 user 0m25.110s 00:21:26.510 sys 0m1.270s 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.510 ************************************ 00:21:26.510 END TEST nvmf_shutdown_tc2 00:21:26.510 ************************************ 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:26.510 ************************************ 00:21:26.510 START TEST nvmf_shutdown_tc3 00:21:26.510 ************************************ 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:26.510 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@295 -- # net_devs=() 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.511 17:29:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:26.511 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:21:26.511 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:26.511 Found net devices under 0000:86:00.0: cvl_0_0 
00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:26.511 Found net devices under 0000:86:00.1: cvl_0_1 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:26.511 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:26.512 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:26.512 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:26.512 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:26.512 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:26.512 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:26.512 17:29:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:26.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:21:26.512 00:21:26.512 --- 10.0.0.2 ping statistics --- 00:21:26.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.512 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:26.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:21:26.512 00:21:26.512 --- 10.0.0.1 ping statistics --- 00:21:26.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.512 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=4130778 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 4130778 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 4130778 ']' 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.512 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:26.512 [2024-07-12 17:29:45.164506] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:21:26.512 [2024-07-12 17:29:45.164547] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.512 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.512 [2024-07-12 17:29:45.222328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.864 [2024-07-12 17:29:45.304211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.864 [2024-07-12 17:29:45.304244] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.864 [2024-07-12 17:29:45.304251] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.864 [2024-07-12 17:29:45.304257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.864 [2024-07-12 17:29:45.304266] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:26.864 [2024-07-12 17:29:45.304365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.864 [2024-07-12 17:29:45.304448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.864 [2024-07-12 17:29:45.304665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:26.864 [2024-07-12 17:29:45.304667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.434 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.434 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:27.434 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:27.434 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:27.434 17:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.434 [2024-07-12 17:29:46.034323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:27.434 
17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:27.434 17:29:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.434 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.434 Malloc1 00:21:27.434 [2024-07-12 17:29:46.130233] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.434 Malloc2 00:21:27.434 Malloc3 00:21:27.693 Malloc4 00:21:27.693 Malloc5 00:21:27.693 Malloc6 00:21:27.693 Malloc7 00:21:27.693 Malloc8 00:21:27.693 Malloc9 00:21:27.953 Malloc10 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=4131067 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 
4131067 /var/tmp/bdevperf.sock 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 4131067 ']' 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
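The `create_subsystems` loop traced above (`shutdown.sh@27`/`@28`) appends one batch of RPC commands per subsystem index to `rpcs.txt`, later replayed through `rpc.py` in one shot. A sketch of that accumulate-into-a-batch-file pattern; the RPC names and malloc sizes below are assumptions, since the body of the real `cat` heredoc is not shown in this log:

```shell
# Append one group of SPDK RPC commands per subsystem index to a batch file.
# Command names/sizes are illustrative stand-ins for the real heredoc body.
num_subsystems=({1..10})
rpcs=$(mktemp)
for i in "${num_subsystems[@]}"; do
    cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
```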
00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:27.953 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.954 { 00:21:27.954 "params": { 00:21:27.954 "name": "Nvme$subsystem", 00:21:27.954 "trtype": "$TEST_TRANSPORT", 00:21:27.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.954 "adrfam": "ipv4", 00:21:27.954 "trsvcid": "$NVMF_PORT", 00:21:27.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.954 "hdgst": ${hdgst:-false}, 00:21:27.954 "ddgst": ${ddgst:-false} 00:21:27.954 }, 00:21:27.954 "method": "bdev_nvme_attach_controller" 00:21:27.954 } 00:21:27.954 EOF 00:21:27.954 )") 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.954 { 00:21:27.954 "params": { 00:21:27.954 "name": "Nvme$subsystem", 00:21:27.954 "trtype": "$TEST_TRANSPORT", 00:21:27.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.954 "adrfam": "ipv4", 00:21:27.954 "trsvcid": "$NVMF_PORT", 00:21:27.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.954 "hdgst": ${hdgst:-false}, 00:21:27.954 "ddgst": ${ddgst:-false} 00:21:27.954 
}, 00:21:27.954 "method": "bdev_nvme_attach_controller" 00:21:27.954 } 00:21:27.954 EOF 00:21:27.954 )") 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.954 { 00:21:27.954 "params": { 00:21:27.954 "name": "Nvme$subsystem", 00:21:27.954 "trtype": "$TEST_TRANSPORT", 00:21:27.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.954 "adrfam": "ipv4", 00:21:27.954 "trsvcid": "$NVMF_PORT", 00:21:27.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.954 "hdgst": ${hdgst:-false}, 00:21:27.954 "ddgst": ${ddgst:-false} 00:21:27.954 }, 00:21:27.954 "method": "bdev_nvme_attach_controller" 00:21:27.954 } 00:21:27.954 EOF 00:21:27.954 )") 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.954 { 00:21:27.954 "params": { 00:21:27.954 "name": "Nvme$subsystem", 00:21:27.954 "trtype": "$TEST_TRANSPORT", 00:21:27.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.954 "adrfam": "ipv4", 00:21:27.954 "trsvcid": "$NVMF_PORT", 00:21:27.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.954 "hdgst": ${hdgst:-false}, 00:21:27.954 "ddgst": ${ddgst:-false} 00:21:27.954 }, 00:21:27.954 "method": "bdev_nvme_attach_controller" 00:21:27.954 } 00:21:27.954 EOF 00:21:27.954 )") 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:27.954 17:29:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.954 { 00:21:27.954 "params": { 00:21:27.954 "name": "Nvme$subsystem", 00:21:27.954 "trtype": "$TEST_TRANSPORT", 00:21:27.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.954 "adrfam": "ipv4", 00:21:27.954 "trsvcid": "$NVMF_PORT", 00:21:27.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.954 "hdgst": ${hdgst:-false}, 00:21:27.954 "ddgst": ${ddgst:-false} 00:21:27.954 }, 00:21:27.954 "method": "bdev_nvme_attach_controller" 00:21:27.954 } 00:21:27.954 EOF 00:21:27.954 )") 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.954 { 00:21:27.954 "params": { 00:21:27.954 "name": "Nvme$subsystem", 00:21:27.954 "trtype": "$TEST_TRANSPORT", 00:21:27.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.954 "adrfam": "ipv4", 00:21:27.954 "trsvcid": "$NVMF_PORT", 00:21:27.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.954 "hdgst": ${hdgst:-false}, 00:21:27.954 "ddgst": ${ddgst:-false} 00:21:27.954 }, 00:21:27.954 "method": "bdev_nvme_attach_controller" 00:21:27.954 } 00:21:27.954 EOF 00:21:27.954 )") 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.954 { 00:21:27.954 
"params": { 00:21:27.954 "name": "Nvme$subsystem", 00:21:27.954 "trtype": "$TEST_TRANSPORT", 00:21:27.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.954 "adrfam": "ipv4", 00:21:27.954 "trsvcid": "$NVMF_PORT", 00:21:27.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.954 "hdgst": ${hdgst:-false}, 00:21:27.954 "ddgst": ${ddgst:-false} 00:21:27.954 }, 00:21:27.954 "method": "bdev_nvme_attach_controller" 00:21:27.954 } 00:21:27.954 EOF 00:21:27.954 )") 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:27.954 [2024-07-12 17:29:46.603010] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:21:27.954 [2024-07-12 17:29:46.603059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4131067 ] 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.954 { 00:21:27.954 "params": { 00:21:27.954 "name": "Nvme$subsystem", 00:21:27.954 "trtype": "$TEST_TRANSPORT", 00:21:27.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.954 "adrfam": "ipv4", 00:21:27.954 "trsvcid": "$NVMF_PORT", 00:21:27.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.954 "hdgst": ${hdgst:-false}, 00:21:27.954 "ddgst": ${ddgst:-false} 00:21:27.954 }, 00:21:27.954 "method": "bdev_nvme_attach_controller" 00:21:27.954 } 00:21:27.954 EOF 00:21:27.954 )") 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.954 { 00:21:27.954 "params": { 00:21:27.954 "name": "Nvme$subsystem", 00:21:27.954 "trtype": "$TEST_TRANSPORT", 00:21:27.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.954 "adrfam": "ipv4", 00:21:27.954 "trsvcid": "$NVMF_PORT", 00:21:27.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.954 "hdgst": ${hdgst:-false}, 00:21:27.954 "ddgst": ${ddgst:-false} 00:21:27.954 }, 00:21:27.954 "method": "bdev_nvme_attach_controller" 00:21:27.954 } 00:21:27.954 EOF 00:21:27.954 )") 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.954 { 00:21:27.954 "params": { 00:21:27.954 "name": "Nvme$subsystem", 00:21:27.954 "trtype": "$TEST_TRANSPORT", 00:21:27.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.954 "adrfam": "ipv4", 00:21:27.954 "trsvcid": "$NVMF_PORT", 00:21:27.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.954 "hdgst": ${hdgst:-false}, 00:21:27.954 "ddgst": ${ddgst:-false} 00:21:27.954 }, 00:21:27.954 "method": "bdev_nvme_attach_controller" 00:21:27.954 } 00:21:27.954 EOF 00:21:27.954 )") 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:21:27.954 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:27.954 17:29:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:27.954 "params": { 00:21:27.954 "name": "Nvme1", 00:21:27.954 "trtype": "tcp", 00:21:27.954 "traddr": "10.0.0.2", 00:21:27.954 "adrfam": "ipv4", 00:21:27.954 "trsvcid": "4420", 00:21:27.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:27.954 "hdgst": false, 00:21:27.954 "ddgst": false 00:21:27.954 }, 00:21:27.954 "method": "bdev_nvme_attach_controller" 00:21:27.954 },{ 00:21:27.954 "params": { 00:21:27.954 "name": "Nvme2", 00:21:27.954 "trtype": "tcp", 00:21:27.954 "traddr": "10.0.0.2", 00:21:27.954 "adrfam": "ipv4", 00:21:27.954 "trsvcid": "4420", 00:21:27.954 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:27.954 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:27.954 "hdgst": false, 00:21:27.954 "ddgst": false 00:21:27.954 }, 00:21:27.954 "method": "bdev_nvme_attach_controller" 00:21:27.954 },{ 00:21:27.954 "params": { 00:21:27.954 "name": "Nvme3", 00:21:27.955 "trtype": "tcp", 00:21:27.955 "traddr": "10.0.0.2", 00:21:27.955 "adrfam": "ipv4", 00:21:27.955 "trsvcid": "4420", 00:21:27.955 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:27.955 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:27.955 "hdgst": false, 00:21:27.955 "ddgst": false 00:21:27.955 }, 00:21:27.955 "method": "bdev_nvme_attach_controller" 00:21:27.955 },{ 00:21:27.955 "params": { 00:21:27.955 "name": "Nvme4", 00:21:27.955 "trtype": "tcp", 00:21:27.955 "traddr": "10.0.0.2", 00:21:27.955 "adrfam": "ipv4", 00:21:27.955 "trsvcid": "4420", 00:21:27.955 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:27.955 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:27.955 "hdgst": false, 00:21:27.955 "ddgst": false 00:21:27.955 }, 00:21:27.955 "method": "bdev_nvme_attach_controller" 00:21:27.955 },{ 
00:21:27.955 "params": { 00:21:27.955 "name": "Nvme5", 00:21:27.955 "trtype": "tcp", 00:21:27.955 "traddr": "10.0.0.2", 00:21:27.955 "adrfam": "ipv4", 00:21:27.955 "trsvcid": "4420", 00:21:27.955 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:27.955 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:27.955 "hdgst": false, 00:21:27.955 "ddgst": false 00:21:27.955 }, 00:21:27.955 "method": "bdev_nvme_attach_controller" 00:21:27.955 },{ 00:21:27.955 "params": { 00:21:27.955 "name": "Nvme6", 00:21:27.955 "trtype": "tcp", 00:21:27.955 "traddr": "10.0.0.2", 00:21:27.955 "adrfam": "ipv4", 00:21:27.955 "trsvcid": "4420", 00:21:27.955 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:27.955 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:27.955 "hdgst": false, 00:21:27.955 "ddgst": false 00:21:27.955 }, 00:21:27.955 "method": "bdev_nvme_attach_controller" 00:21:27.955 },{ 00:21:27.955 "params": { 00:21:27.955 "name": "Nvme7", 00:21:27.955 "trtype": "tcp", 00:21:27.955 "traddr": "10.0.0.2", 00:21:27.955 "adrfam": "ipv4", 00:21:27.955 "trsvcid": "4420", 00:21:27.955 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:27.955 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:27.955 "hdgst": false, 00:21:27.955 "ddgst": false 00:21:27.955 }, 00:21:27.955 "method": "bdev_nvme_attach_controller" 00:21:27.955 },{ 00:21:27.955 "params": { 00:21:27.955 "name": "Nvme8", 00:21:27.955 "trtype": "tcp", 00:21:27.955 "traddr": "10.0.0.2", 00:21:27.955 "adrfam": "ipv4", 00:21:27.955 "trsvcid": "4420", 00:21:27.955 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:27.955 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:27.955 "hdgst": false, 00:21:27.955 "ddgst": false 00:21:27.955 }, 00:21:27.955 "method": "bdev_nvme_attach_controller" 00:21:27.955 },{ 00:21:27.955 "params": { 00:21:27.955 "name": "Nvme9", 00:21:27.955 "trtype": "tcp", 00:21:27.955 "traddr": "10.0.0.2", 00:21:27.955 "adrfam": "ipv4", 00:21:27.955 "trsvcid": "4420", 00:21:27.955 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:27.955 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:21:27.955 "hdgst": false, 00:21:27.955 "ddgst": false 00:21:27.955 }, 00:21:27.955 "method": "bdev_nvme_attach_controller" 00:21:27.955 },{ 00:21:27.955 "params": { 00:21:27.955 "name": "Nvme10", 00:21:27.955 "trtype": "tcp", 00:21:27.955 "traddr": "10.0.0.2", 00:21:27.955 "adrfam": "ipv4", 00:21:27.955 "trsvcid": "4420", 00:21:27.955 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:27.955 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:27.955 "hdgst": false, 00:21:27.955 "ddgst": false 00:21:27.955 }, 00:21:27.955 "method": "bdev_nvme_attach_controller" 00:21:27.955 }' 00:21:27.955 [2024-07-12 17:29:46.659123] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.214 [2024-07-12 17:29:46.732348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.589 Running I/O for 10 seconds... 00:21:29.589 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.589 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:29.589 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:29.589 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.589 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # 
'[' -z /var/tmp/bdevperf.sock ']' 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:29.847 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:30.105 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:30.105 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:30.105 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:30.105 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 
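`gen_nvmf_target_json`, traced above, accumulates one heredoc JSON fragment per subsystem in a bash array (`config+=(...)`) and then joins the fragments with `IFS=,` into the array that becomes bdevperf's `--json` config. A reduced sketch of that accumulate-and-join pattern, using three subsystems and the fixed tcp/10.0.0.2/4420 values seen in the final `printf` output:

```shell
# Build one JSON params fragment per subsystem, then join the fragments with
# commas into a single JSON array, mirroring the config+=() / IFS=, join above.
config=()
for subsystem in 1 2 3; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
json=$(IFS=,; printf '[%s]' "${config[*]}")
```

The `IFS=,` subshell trick is what turns `"${config[*]}"` into a comma-separated list without touching the caller's word splitting.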
00:21:30.105 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.105 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.105 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.105 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=73 00:21:30.105 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 73 -ge 100 ']' 00:21:30.105 17:29:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=205 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 205 -ge 100 ']' 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- 
# return 0 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 4130778 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 4130778 ']' 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 4130778 00:21:30.363 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:30.635 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:30.635 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4130778 00:21:30.635 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:30.635 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:30.635 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4130778' 00:21:30.635 killing process with pid 4130778 00:21:30.635 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 4130778 00:21:30.635 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 4130778 00:21:30.635 [2024-07-12 17:29:49.184741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [2024-07-12 17:29:49.184788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [2024-07-12 17:29:49.184796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [2024-07-12 17:29:49.184803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [2024-07-12 17:29:49.184810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [last message repeated for tqpair=0x1001430 through 2024-07-12 17:29:49.185105]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [2024-07-12 17:29:49.185111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [2024-07-12 17:29:49.185117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [2024-07-12 17:29:49.185123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [2024-07-12 17:29:49.185128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [2024-07-12 17:29:49.185134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [2024-07-12 17:29:49.185140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [2024-07-12 17:29:49.185146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [2024-07-12 17:29:49.185152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.635 [2024-07-12 17:29:49.185158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.636 [2024-07-12 17:29:49.185166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001430 is same with the state(5) to be set 00:21:30.636 [2024-07-12 17:29:49.186713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003e30 is same with the state(5) to be set 00:21:30.636 [2024-07-12 17:29:49.186738] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003e30 is same with the state(5) to be set
[... identical message repeated for tqpair=0x1003e30 from 17:29:49.186746 through 17:29:49.187108 ...]
00:21:30.636 [2024-07-12 17:29:49.187114]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003e30 is same with the state(5) to be set 00:21:30.636 [2024-07-12 17:29:49.187120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003e30 is same with the state(5) to be set 00:21:30.636 [2024-07-12 17:29:49.188345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10018d0 is same with the state(5) to be set 00:21:30.636 [2024-07-12 17:29:49.188370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10018d0 is same with the state(5) to be set 00:21:30.636 [2024-07-12 17:29:49.188646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.636 [2024-07-12 17:29:49.188676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous WRITE command/completion pairs for cid:37-63, lba:29312-32640 (len:128 each), every command ABORTED - SQ DELETION (00/08), through 17:29:49.189085 ...]
00:21:30.637 [2024-07-12 17:29:49.189093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.637 [2024-07-12 17:29:49.189099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ command/completion pairs for cid:1-3, lba:24704-24960, each ABORTED - SQ DELETION (00/08), through 17:29:49.189141 ...]
00:21:30.637 [2024-07-12
17:29:49.189151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.637 [2024-07-12 17:29:49.189148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with [2024-07-12 17:29:49.189158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:21:30.637 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.637 [2024-07-12 17:29:49.189172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.637 [2024-07-12 17:29:49.189174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.637 [2024-07-12 17:29:49.189184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.637 [2024-07-12 17:29:49.189192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.637 [2024-07-12 17:29:49.189199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.637 [2024-07-12 17:29:49.189208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.637 [2024-07-12 17:29:49.189216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.637 [2024-07-12 17:29:49.189223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.637 [2024-07-12 17:29:49.189231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.637 [2024-07-12 17:29:49.189238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.637 [2024-07-12 17:29:49.189246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.637 [2024-07-12 17:29:49.189253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-12 17:29:49.189261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.637 the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.637 [2024-07-12 17:29:49.189276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.637 [2024-07-12 17:29:49.189284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.637 [2024-07-12 17:29:49.189291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.637 [2024-07-12 17:29:49.189298] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.637 [2024-07-12 17:29:49.189305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.637 [2024-07-12 17:29:49.189313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.637 [2024-07-12 17:29:49.189321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.637 [2024-07-12 17:29:49.189329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.637 [2024-07-12 17:29:49.189338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set
00:21:30.638 [2024-07-12 17:29:49.189346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set
00:21:30.638 [2024-07-12 17:29:49.189553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001d70 is same with the state(5) to be set 00:21:30.638 [2024-07-12 17:29:49.189656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.638 [2024-07-12 17:29:49.189700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.189765] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a1a490 was disconnected and freed. reset controller. 
00:21:30.638 [2024-07-12 17:29:49.190318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.638 [2024-07-12 17:29:49.190337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.638 [2024-07-12 17:29:49.190348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.638 [2024-07-12 17:29:49.190354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.639 [2024-07-12 17:29:49.190369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.639 [2024-07-12 17:29:49.190389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b078b0 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.190429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.639 [2024-07-12 17:29:49.190438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.639 [2024-07-12 17:29:49.190451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.639 [2024-07-12 17:29:49.190464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.639 [2024-07-12 17:29:49.190477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e8d0 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.190503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.639 [2024-07-12 17:29:49.190511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.639 [2024-07-12 17:29:49.190524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:30.639 [2024-07-12 17:29:49.190537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.639 [2024-07-12 17:29:49.190551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1952c70 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.190598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.639 [2024-07-12 17:29:49.190611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.639 [2024-07-12 17:29:49.190625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.639 [2024-07-12 17:29:49.190642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.639 [2024-07-12 17:29:49.190655] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.639 [2024-07-12 17:29:49.190661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b27050 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.192096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:30.639 [2024-07-12 17:29:49.192127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1e8d0 (9): Bad file descriptor 00:21:30.639 [2024-07-12 17:29:49.193775] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193801] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is 
same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193892] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193927] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be 
set 00:21:30.639 [2024-07-12 17:29:49.193933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.193995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.194002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 
17:29:49.194009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.194015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.194021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.194027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.639 [2024-07-12 17:29:49.194021] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:30.640 [2024-07-12 17:29:49.194033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is 
same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.640 [2024-07-12 17:29:49.194166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1e8d0 with addr=10.0.0.2, port=4420 00:21:30.640 [2024-07-12 17:29:49.194183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e8d0 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002230 is same with the state(5) to be set 00:21:30.640 [2024-07-12 17:29:49.194233] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:30.640 [2024-07-12 17:29:49.194870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1e8d0 (9): Bad file descriptor 00:21:30.640 [2024-07-12 17:29:49.194947] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:30.640 [2024-07-12 17:29:49.195352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in
error state 00:21:30.640 [2024-07-12 17:29:49.195369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:30.640 [2024-07-12 17:29:49.195387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:30.640 [2024-07-12 17:29:49.195508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:30.640 [2024-07-12 17:29:49.195759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195838] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.640 [2024-07-12 17:29:49.195906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.640 [2024-07-12 17:29:49.195913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.641 [2024-07-12 17:29:49.195922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.641 [2024-07-12 17:29:49.195928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.641 [2024-07-12 17:29:49.195936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.641 [2024-07-12 17:29:49.195943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.641 [2024-07-12 17:29:49.195951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.641 [2024-07-12 17:29:49.195957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.641 [2024-07-12 17:29:49.195965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.641 [2024-07-12 17:29:49.195971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.641 [2024-07-12 17:29:49.195979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.641 [2024-07-12 17:29:49.195985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.641 [2024-07-12 17:29:49.195994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.641 [2024-07-12 17:29:49.196000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:30.641 [2024-07-12 17:29:49.196008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.641 [2024-07-12 17:29:49.196015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.641 [2024-07-12 17:29:49.196023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.641 [2024-07-12 17:29:49.196029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.641 [2024-07-12 17:29:49.196036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.641 [2024-07-12 17:29:49.196043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.641 [2024-07-12 17:29:49.196050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.641 [2024-07-12 17:29:49.196057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.641 [2024-07-12 17:29:49.196065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.641 [2024-07-12 17:29:49.196073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.641 [2024-07-12 17:29:49.196081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.641 [2024-07-12 
17:29:49.196087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1002b90 is same with the state(5) to be set
00:21:30.641 [2024-07-12 17:29:49.196139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.641 [2024-07-12 17:29:49.196345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.641 [2024-07-12 17:29:49.196352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.642 [2024-07-12 17:29:49.196361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.642 [2024-07-12 17:29:49.196370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.642 [2024-07-12 17:29:49.196386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.642 [2024-07-12 17:29:49.196396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.642 [2024-07-12 17:29:49.196404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.642 [2024-07-12 17:29:49.196413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.642 [2024-07-12 17:29:49.196474] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x194e040 was disconnected and freed. reset controller.
00:21:30.642 [2024-07-12 17:29:49.196668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:30.642 [2024-07-12 17:29:49.197810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:30.642 [2024-07-12 17:29:49.197854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1340 (9): Bad file descriptor
00:21:30.642 [2024-07-12 17:29:49.198086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10034d0 is same with the state(5) to be set
00:21:30.643 [same tcp.c:1607 message repeated for tqpair=0x10034d0, 17:29:49.198105 through 17:29:49.198445]
00:21:30.643 [2024-07-12 17:29:49.198452]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10034d0 is same with the state(5) to be set
00:21:30.643 [2024-07-12 17:29:49.198550] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:30.643 [2024-07-12 17:29:49.198944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.643 [2024-07-12 17:29:49.198960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.643 [2024-07-12 17:29:49.198973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.643 [2024-07-12 17:29:49.198980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.643 [2024-07-12 17:29:49.198989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.643 [2024-07-12 17:29:49.198996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.643 [2024-07-12 17:29:49.198996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003970 is same with the state(5) to be set
00:21:30.643 [2024-07-12 17:29:49.199005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.643 [2024-07-12 17:29:49.199015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.643 [2024-07-12 17:29:49.199018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003970 is same with the state(5) to be set
00:21:30.643
[2024-07-12 17:29:49.199023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.643 [2024-07-12 17:29:49.199031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.643 [2024-07-12 17:29:49.199039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.643 [2024-07-12 17:29:49.199047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.643 [2024-07-12 17:29:49.199056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.643 [2024-07-12 17:29:49.199064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.643 [2024-07-12 17:29:49.199073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.643 [2024-07-12 17:29:49.199083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.643 [2024-07-12 17:29:49.199094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.643 [2024-07-12 17:29:49.199101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.643 [2024-07-12 17:29:49.199110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a95de0 is same with the state(5) to be set
00:21:30.643 [same tcp.c:1607 message for tqpair=0x1003970 interleaved throughout, 17:29:49.199026 through 17:29:49.199162]
00:21:30.643 [2024-07-12 17:29:49.199163] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a95de0 was disconnected and freed. reset controller.
00:21:30.643 [2024-07-12 17:29:49.199168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003970 is same with the state(5) to be set
00:21:30.644 [same tcp.c:1607 message repeated for tqpair=0x1003970, 17:29:49.199174 through 17:29:49.199411]
00:21:30.644 [2024-07-12 17:29:49.199436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:30.644 [2024-07-12 17:29:49.199454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a1340 with addr=10.0.0.2, port=4420
00:21:30.644 [2024-07-12 17:29:49.199461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1340 is same with the state(5) to be set
00:21:30.644 [2024-07-12 17:29:49.200329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:30.644 [2024-07-12 17:29:49.200369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1975190 (9): Bad file descriptor
00:21:30.644 [2024-07-12 17:29:49.200387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1340 (9): Bad file descriptor
00:21:30.644 [2024-07-12 17:29:49.200409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0
nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e0d0 is same with the state(5) to be set
00:21:30.644 [2024-07-12 17:29:49.200480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b078b0 (9): Bad file descriptor
00:21:30.644 [2024-07-12 17:29:49.200509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996b30 is same with the state(5) to be set
00:21:30.644 [2024-07-12 17:29:49.200575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1952c70 (9): Bad file descriptor
00:21:30.644 [2024-07-12 17:29:49.200598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999bf0 is same with the state(5) to be set
00:21:30.644 [2024-07-12 17:29:49.200669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b27050 (9): Bad file descriptor
00:21:30.644 [2024-07-12 17:29:49.200693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.644 [2024-07-12 17:29:49.200715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.644 [2024-07-12 17:29:49.200721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.645 [2024-07-12 17:29:49.200729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.200737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:30.645 [2024-07-12 17:29:49.200743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.200749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198f1d0 is same with the state(5) to be set
00:21:30.645 [2024-07-12 17:29:49.200827] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:30.645 [2024-07-12 17:29:49.200963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:21:30.645 [2024-07-12 17:29:49.200973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:21:30.645 [2024-07-12 17:29:49.200982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:21:30.645 [2024-07-12 17:29:49.201325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:30.645 [2024-07-12 17:29:49.201438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.645 [2024-07-12 17:29:49.201451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1975190 with addr=10.0.0.2, port=4420 00:21:30.645 [2024-07-12 17:29:49.201458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1975190 is same with the state(5) to be set 00:21:30.645 [2024-07-12 17:29:49.201521] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:30.645 [2024-07-12 17:29:49.201581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1975190 (9): Bad file descriptor 00:21:30.645 [2024-07-12 17:29:49.201657] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:30.645 [2024-07-12 17:29:49.201672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:30.645 [2024-07-12 17:29:49.201678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:30.645 [2024-07-12 17:29:49.201684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:30.645 [2024-07-12 17:29:49.201727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:30.645 [2024-07-12 17:29:49.203253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:30.645 [2024-07-12 17:29:49.203469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.645 [2024-07-12 17:29:49.203481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1e8d0 with addr=10.0.0.2, port=4420 00:21:30.645 [2024-07-12 17:29:49.203488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e8d0 is same with the state(5) to be set 00:21:30.645 [2024-07-12 17:29:49.203520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1e8d0 (9): Bad file descriptor 00:21:30.645 [2024-07-12 17:29:49.203552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:30.645 [2024-07-12 17:29:49.203558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:30.645 [2024-07-12 17:29:49.203566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:30.645 [2024-07-12 17:29:49.203599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:30.645 [2024-07-12 17:29:49.208595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:30.645 [2024-07-12 17:29:49.208865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:30.645 [2024-07-12 17:29:49.208876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a1340 with addr=10.0.0.2, port=4420
00:21:30.645 [2024-07-12 17:29:49.208887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1340 is same with the state(5) to be set
00:21:30.645 [2024-07-12 17:29:49.208919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1340 (9): Bad file descriptor
00:21:30.645 [2024-07-12 17:29:49.208950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:21:30.645 [2024-07-12 17:29:49.208956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:21:30.645 [2024-07-12 17:29:49.208962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:21:30.645 [2024-07-12 17:29:49.208995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:30.645 [2024-07-12 17:29:49.210373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1e0d0 (9): Bad file descriptor
00:21:30.645 [2024-07-12 17:29:49.210399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1996b30 (9): Bad file descriptor
00:21:30.645 [2024-07-12 17:29:49.210416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1999bf0 (9): Bad file descriptor
00:21:30.645 [2024-07-12 17:29:49.210434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198f1d0 (9): Bad file descriptor
00:21:30.645 [2024-07-12 17:29:49.210520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.645 [2024-07-12 17:29:49.210530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.210541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.645 [2024-07-12 17:29:49.210547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.210556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.645 [2024-07-12 17:29:49.210563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.210571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.645 [2024-07-12 17:29:49.210577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.210586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.645 [2024-07-12 17:29:49.210592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.210600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.645 [2024-07-12 17:29:49.210606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.210614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.645 [2024-07-12 17:29:49.210620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.210628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.645 [2024-07-12 17:29:49.210634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.210645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.645 [2024-07-12 17:29:49.210652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.210660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.645 [2024-07-12 17:29:49.210666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.210674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.645 [2024-07-12 17:29:49.210680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.210688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.645 [2024-07-12 17:29:49.210694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.645 [2024-07-12 17:29:49.210702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.645 [2024-07-12 17:29:49.210709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.210992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.210999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.211008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.211014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.211023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.211029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.211037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.211043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.211052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.211058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.211067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.211073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.211081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.211087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.211095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.211102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.211110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.211117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.646 [2024-07-12 17:29:49.211125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.646 [2024-07-12 17:29:49.211131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.211458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.211465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a93ea0 is same with the state(5) to be set
00:21:30.647 [2024-07-12 17:29:49.212485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.212501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.212512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.212519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.212527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.212534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.212542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.212549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.212557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.212564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.212572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.647 [2024-07-12 17:29:49.212579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.647 [2024-07-12 17:29:49.212587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.212842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.212850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.219198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.219220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.219227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.219235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.219242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.219250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.219256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.219264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.219270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.219278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.219285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.219293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.219299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.219307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.219314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.219322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.219330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.219338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.219345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.219352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.219359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.219367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.648 [2024-07-12 17:29:49.219373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.648 [2024-07-12 17:29:49.219386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 
17:29:49.219472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.649 [2024-07-12 17:29:49.219667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.649 [2024-07-12 17:29:49.219675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.219682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.219690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.219697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.219705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.219712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:30.650 [2024-07-12 17:29:49.219719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.219726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.219734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.219740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.219748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.219754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.219762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.219768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.219777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.219783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.219791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.219797] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.219804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1b920 is same with the state(5) to be set 00:21:30.650 [2024-07-12 17:29:49.220852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.220866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.220876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.220883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.220891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.220898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.220907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.220913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.220921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.220930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.220939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.220945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.220952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.220959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.220968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.220974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.220983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.220989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.220997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.221004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.221011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:30.650 [2024-07-12 17:29:49.221018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.221026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.221032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.221040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.221046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.221054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.221060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.221068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.221075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.221082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.221089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.221097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.221103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.650 [2024-07-12 17:29:49.221112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.650 [2024-07-12 17:29:49.221118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221345] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221429] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.651 [2024-07-12 17:29:49.221473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.651 [2024-07-12 17:29:49.221482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 
17:29:49.221596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.221790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.221797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7cef0 is same with the state(5) to be set 00:21:30.652 [2024-07-12 17:29:49.223133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:30.652 [2024-07-12 17:29:49.223158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:30.652 [2024-07-12 17:29:49.223170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:30.652 [2024-07-12 17:29:49.223645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.652 [2024-07-12 17:29:49.223666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1952c70 with addr=10.0.0.2, port=4420 00:21:30.652 [2024-07-12 17:29:49.223676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1952c70 is same with the state(5) to be set 
00:21:30.652 [2024-07-12 17:29:49.223851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.652 [2024-07-12 17:29:49.223864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b27050 with addr=10.0.0.2, port=4420 00:21:30.652 [2024-07-12 17:29:49.223873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b27050 is same with the state(5) to be set 00:21:30.652 [2024-07-12 17:29:49.224102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.652 [2024-07-12 17:29:49.224115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b078b0 with addr=10.0.0.2, port=4420 00:21:30.652 [2024-07-12 17:29:49.224123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b078b0 is same with the state(5) to be set 00:21:30.652 [2024-07-12 17:29:49.224751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.224766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.224780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.224790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.224801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.652 [2024-07-12 17:29:49.224810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.652 [2024-07-12 17:29:49.224821] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.224830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.224841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.224849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.224860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.224869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.224880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.224889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.224900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.224908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.224919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.224928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.224938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.224947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.224962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.224971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.224982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.224991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225160] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225270] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.653 [2024-07-12 17:29:49.225464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.653 [2024-07-12 17:29:49.225475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 
17:29:49.225506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:30.654 [2024-07-12 17:29:49.225843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225950] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.225983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.225992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.226003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.226012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.226023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.226031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.226041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194cb70 is same with the state(5) to be set 00:21:30.654 [2024-07-12 17:29:49.227389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.227407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.227421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.227430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.227441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.227450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.227462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.227471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.227482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.227490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.227502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.654 [2024-07-12 17:29:49.227510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.654 [2024-07-12 17:29:49.227521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:21:30.654 [2024-07-12 17:29:49.227530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.654 [2024-07-12 17:29:49.227541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.654 [2024-07-12 17:29:49.227550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.654 [2024-07-12 17:29:49.227561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.654 [2024-07-12 17:29:49.227573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.227986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.227996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.228005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.228016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.228025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.228036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.228044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.228055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.228066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.655 [2024-07-12 17:29:49.228077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.655 [2024-07-12 17:29:49.228085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.228664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.228674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a94910 is same with the state(5) to be set
00:21:30.656 [2024-07-12 17:29:49.230010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.230026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.230039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.230048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.230060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.230069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.230080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.230088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.656 [2024-07-12 17:29:49.230099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.656 [2024-07-12 17:29:49.230108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.657 [2024-07-12 17:29:49.230822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.657 [2024-07-12 17:29:49.230833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.230841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.230852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.230861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.230872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.230881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.230896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.230905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.230916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.230924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.230936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.230945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.230955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.230964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.230975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.230984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.230995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.231003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.231014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.231022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.231033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.231042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.231053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.231061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.231072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.231080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.231091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.658 [2024-07-12 17:29:49.231100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:30.658 [2024-07-12 17:29:49.231110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.231119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.231130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.231140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.231151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.231159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.231171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.231180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.231191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.231199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.231209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a972b0 is same with the state(5) to be set 00:21:30.658 [2024-07-12 17:29:49.232529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:30.658 [2024-07-12 17:29:49.232767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.232982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.232993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.233002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.233013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.233022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.233033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.658 [2024-07-12 17:29:49.233042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.658 [2024-07-12 17:29:49.233053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233200] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233277] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 
17:29:49.233448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.659 [2024-07-12 17:29:49.233632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.659 [2024-07-12 17:29:49.233639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ba60 is same with the state(5) to be set 00:21:30.659 [2024-07-12 17:29:49.236272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:30.659 [2024-07-12 17:29:49.236295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:30.659 [2024-07-12 17:29:49.236303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:30.659 [2024-07-12 17:29:49.236312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:30.659 [2024-07-12 17:29:49.236319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:30.659 [2024-07-12 17:29:49.236358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1952c70 (9): Bad file descriptor 00:21:30.659 [2024-07-12 17:29:49.236369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b27050 (9): Bad file descriptor 00:21:30.659 [2024-07-12 17:29:49.236381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b078b0 (9): Bad file descriptor 00:21:30.659 [2024-07-12 17:29:49.236413] 
bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:30.659 [2024-07-12 17:29:49.236426] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:30.659 [2024-07-12 17:29:49.236437] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:30.659 [2024-07-12 17:29:49.236446] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:30.659 [2024-07-12 17:29:49.236456] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:30.660 [2024-07-12 17:29:49.236521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:30.660 task offset: 29184 on job bdev=Nvme2n1 fails
00:21:30.660
00:21:30.660 Latency(us)
00:21:30.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:30.660 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.660 Job: Nvme1n1 ended in about 0.88 seconds with error
00:21:30.660 Verification LBA range: start 0x0 length 0x400
00:21:30.660 Nvme1n1 : 0.88 229.44 14.34 72.69 0.00 209596.94 22339.23 195126.32
00:21:30.660 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.660 Job: Nvme2n1 ended in about 0.86 seconds with error
00:21:30.660 Verification LBA range: start 0x0 length 0x400
00:21:30.660 Nvme2n1 : 0.86 223.33 13.96 74.44 0.00 208643.51 5356.86 211538.81
00:21:30.660 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.660 Job: Nvme3n1 ended in about 0.89 seconds with error
00:21:30.660 Verification LBA range: start 0x0 length 0x400
00:21:30.660 Nvme3n1 : 0.89 216.04 13.50 72.01 0.00 211951.97 13905.03 223392.28
00:21:30.660 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.660 Job: Nvme4n1 ended in about 0.90 seconds with error
00:21:30.660 Verification LBA range: start 0x0 length 0x400
00:21:30.660 Nvme4n1 : 0.90 214.52 13.41 71.51 0.00 209524.20 14019.01 230686.72
00:21:30.660 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.660 Job: Nvme5n1 ended in about 0.87 seconds with error
00:21:30.660 Verification LBA range: start 0x0 length 0x400
00:21:30.660 Nvme5n1 : 0.87 228.72 14.30 67.00 0.00 198174.23 1787.99 218833.25
00:21:30.660 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.660 Job: Nvme6n1 ended in about 0.90 seconds with error
00:21:30.660 Verification LBA range: start 0x0 length 0x400
00:21:30.660 Nvme6n1 : 0.90 219.46 13.72 71.30 0.00 198413.82 8833.11 211538.81
00:21:30.660 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.660 Job: Nvme7n1 ended in about 0.87 seconds with error
00:21:30.660 Verification LBA range: start 0x0 length 0x400
00:21:30.660 Nvme7n1 : 0.87 294.83 18.43 10.37 0.00 184141.00 11340.58 214274.23
00:21:30.660 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.660 Job: Nvme8n1 ended in about 0.90 seconds with error
00:21:30.660 Verification LBA range: start 0x0 length 0x400
00:21:30.660 Nvme8n1 : 0.90 217.73 13.61 66.65 0.00 194434.89 31001.38 198773.54
00:21:30.660 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.660 Job: Nvme9n1 ended in about 0.90 seconds with error
00:21:30.660 Verification LBA range: start 0x0 length 0x400
00:21:30.660 Nvme9n1 : 0.90 141.82 8.86 70.91 0.00 255743.85 17894.18 225215.89
00:21:30.660 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:30.660 Job: Nvme10n1 ended in about 0.89 seconds with error
00:21:30.660 Verification LBA range: start 0x0 length 0x400
00:21:30.660 Nvme10n1 : 0.89 143.71 8.98 71.85 0.00 246596.42 19375.86 242540.19
00:21:30.660 ===================================================================================================================
00:21:30.660 Total : 2129.61 133.10 648.74 0.00 209529.34 1787.99 242540.19
00:21:30.660 [2024-07-12 17:29:49.264749] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:30.660 [2024-07-12 17:29:49.264795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:30.660 [2024-07-12 17:29:49.265108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:30.660 [2024-07-12 17:29:49.265129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1975190 with addr=10.0.0.2, port=4420
00:21:30.660 [2024-07-12 17:29:49.265141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1975190 is same with the state(5) to be set
00:21:30.660 [2024-07-12 17:29:49.265314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:30.660 [2024-07-12 17:29:49.265327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1e8d0 with addr=10.0.0.2, port=4420
00:21:30.660 [2024-07-12 17:29:49.265336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e8d0 is same with the state(5) to be set
00:21:30.660 [2024-07-12 17:29:49.265556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:30.660 [2024-07-12 17:29:49.265570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a1340 with addr=10.0.0.2, port=4420
00:21:30.660 [2024-07-12 17:29:49.265580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1340 is same with the state(5) to be set
00:21:30.660 [2024-07-12 17:29:49.265748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:30.660 [2024-07-12 17:29:49.265761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x198f1d0 with addr=10.0.0.2, port=4420 00:21:30.660 [2024-07-12 17:29:49.265770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198f1d0 is same with the state(5) to be set 00:21:30.660 [2024-07-12 17:29:49.266012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.660 [2024-07-12 17:29:49.266024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1996b30 with addr=10.0.0.2, port=4420 00:21:30.660 [2024-07-12 17:29:49.266033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1996b30 is same with the state(5) to be set 00:21:30.660 [2024-07-12 17:29:49.266042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:30.660 [2024-07-12 17:29:49.266050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:30.660 [2024-07-12 17:29:49.266061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:30.660 [2024-07-12 17:29:49.266078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:30.660 [2024-07-12 17:29:49.266086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:30.660 [2024-07-12 17:29:49.266094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:21:30.660 [2024-07-12 17:29:49.266107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:30.660 [2024-07-12 17:29:49.266114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:30.660 [2024-07-12 17:29:49.266122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:30.660 [2024-07-12 17:29:49.267362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:30.660 [2024-07-12 17:29:49.267381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:30.660 [2024-07-12 17:29:49.267389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:30.660 [2024-07-12 17:29:49.267602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.660 [2024-07-12 17:29:49.267617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1999bf0 with addr=10.0.0.2, port=4420 00:21:30.660 [2024-07-12 17:29:49.267627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999bf0 is same with the state(5) to be set 00:21:30.660 [2024-07-12 17:29:49.267860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.660 [2024-07-12 17:29:49.267872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1e0d0 with addr=10.0.0.2, port=4420 00:21:30.660 [2024-07-12 17:29:49.267882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e0d0 is same with the state(5) to be set 00:21:30.660 [2024-07-12 17:29:49.267897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1975190 (9): Bad file descriptor 00:21:30.660 [2024-07-12 17:29:49.267910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x1b1e8d0 (9): Bad file descriptor 00:21:30.660 [2024-07-12 17:29:49.267923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1340 (9): Bad file descriptor 00:21:30.660 [2024-07-12 17:29:49.267938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198f1d0 (9): Bad file descriptor 00:21:30.660 [2024-07-12 17:29:49.267949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1996b30 (9): Bad file descriptor 00:21:30.660 [2024-07-12 17:29:49.267998] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:30.660 [2024-07-12 17:29:49.268012] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:30.660 [2024-07-12 17:29:49.268025] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:30.660 [2024-07-12 17:29:49.268037] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:30.660 [2024-07-12 17:29:49.268049] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:30.660 [2024-07-12 17:29:49.268132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1999bf0 (9): Bad file descriptor 00:21:30.660 [2024-07-12 17:29:49.268145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1e0d0 (9): Bad file descriptor 00:21:30.660 [2024-07-12 17:29:49.268155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:30.660 [2024-07-12 17:29:49.268163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:30.660 [2024-07-12 17:29:49.268171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:30.660 [2024-07-12 17:29:49.268183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:30.660 [2024-07-12 17:29:49.268191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:30.660 [2024-07-12 17:29:49.268199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:30.660 [2024-07-12 17:29:49.268210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:30.660 [2024-07-12 17:29:49.268217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:30.660 [2024-07-12 17:29:49.268225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:21:30.660 [2024-07-12 17:29:49.268237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:30.660 [2024-07-12 17:29:49.268244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:30.660 [2024-07-12 17:29:49.268252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:30.660 [2024-07-12 17:29:49.268263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:30.660 [2024-07-12 17:29:49.268271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:30.660 [2024-07-12 17:29:49.268279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:30.660 [2024-07-12 17:29:49.268341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:30.660 [2024-07-12 17:29:49.268354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:30.660 [2024-07-12 17:29:49.268364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:30.660 [2024-07-12 17:29:49.268375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:30.660 [2024-07-12 17:29:49.268388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:30.660 [2024-07-12 17:29:49.268395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:30.660 [2024-07-12 17:29:49.268406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:30.660 [2024-07-12 17:29:49.268432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:30.660 [2024-07-12 17:29:49.268440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:30.660 [2024-07-12 17:29:49.268448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:30.660 [2024-07-12 17:29:49.268459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:30.660 [2024-07-12 17:29:49.268467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:30.660 [2024-07-12 17:29:49.268475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:30.660 [2024-07-12 17:29:49.268497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:30.660 [2024-07-12 17:29:49.268513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:30.660 [2024-07-12 17:29:49.268521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:30.660 [2024-07-12 17:29:49.268781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.660 [2024-07-12 17:29:49.268795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b078b0 with addr=10.0.0.2, port=4420 00:21:30.660 [2024-07-12 17:29:49.268806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b078b0 is same with the state(5) to be set 00:21:30.661 [2024-07-12 17:29:49.269047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.661 [2024-07-12 17:29:49.269061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b27050 with addr=10.0.0.2, port=4420 00:21:30.661 [2024-07-12 17:29:49.269070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b27050 is same with the state(5) to be set 00:21:30.661 [2024-07-12 17:29:49.269299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:30.661 [2024-07-12 17:29:49.269312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1952c70 with addr=10.0.0.2, port=4420 00:21:30.661 [2024-07-12 17:29:49.269321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1952c70 is same with the state(5) to be set 00:21:30.661 [2024-07-12 17:29:49.269357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b078b0 (9): Bad file descriptor 00:21:30.661 [2024-07-12 17:29:49.269369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b27050 (9): Bad file descriptor 00:21:30.661 [2024-07-12 17:29:49.269386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1952c70 (9): Bad file descriptor 00:21:30.661 [2024-07-12 17:29:49.269417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:30.661 
[2024-07-12 17:29:49.269426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:30.661 [2024-07-12 17:29:49.269435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:30.661 [2024-07-12 17:29:49.269446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:30.661 [2024-07-12 17:29:49.269454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:30.661 [2024-07-12 17:29:49.269462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:30.661 [2024-07-12 17:29:49.269473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:30.661 [2024-07-12 17:29:49.269481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:30.661 [2024-07-12 17:29:49.269492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:30.661 [2024-07-12 17:29:49.269524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:30.661 [2024-07-12 17:29:49.269532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:30.661 [2024-07-12 17:29:49.269540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:30.919 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:30.919 17:29:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:31.854 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 4131067 00:21:31.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (4131067) - No such process 00:21:31.854 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:31.854 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:31.854 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:31.854 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:31.854 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:32.113 rmmod nvme_tcp 00:21:32.113 rmmod nvme_fabrics 00:21:32.113 rmmod nvme_keyring 00:21:32.113 17:29:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.113 17:29:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.017 17:29:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:34.017 00:21:34.017 real 0m7.893s 00:21:34.017 user 0m19.948s 00:21:34.017 sys 0m1.287s 00:21:34.017 17:29:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:34.017 17:29:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.017 ************************************ 00:21:34.017 END TEST nvmf_shutdown_tc3 00:21:34.017 ************************************ 00:21:34.017 17:29:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # 
return 0 00:21:34.017 17:29:52 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:34.017 00:21:34.017 real 0m31.063s 00:21:34.017 user 1m18.819s 00:21:34.017 sys 0m8.056s 00:21:34.017 17:29:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:34.017 17:29:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:34.017 ************************************ 00:21:34.017 END TEST nvmf_shutdown 00:21:34.017 ************************************ 00:21:34.275 17:29:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:34.275 17:29:52 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:34.275 17:29:52 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:34.275 17:29:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:34.275 17:29:52 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:34.275 17:29:52 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:34.275 17:29:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:34.275 17:29:52 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:34.275 17:29:52 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:34.275 17:29:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:34.275 17:29:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.275 17:29:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:34.275 ************************************ 00:21:34.275 START TEST nvmf_multicontroller 00:21:34.275 ************************************ 00:21:34.275 17:29:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:34.275 * Looking for test storage... 
00:21:34.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.276 
17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.276 17:29:52 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:34.276 17:29:53 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:34.276 17:29:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.548 17:29:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:39.548 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:39.548 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:39.548 17:29:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:39.548 Found net devices under 0000:86:00.0: cvl_0_0 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:39.548 Found net devices under 0000:86:00.1: cvl_0_1 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.548 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:39.549 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:39.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:21:39.809 00:21:39.809 --- 10.0.0.2 ping statistics --- 00:21:39.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.809 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:39.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:21:39.809 00:21:39.809 --- 10.0.0.1 ping statistics --- 00:21:39.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.809 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
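The `nvmf_tcp_init` phase traced above splits the two physical ports: the target port (`cvl_0_0`) is moved into a fresh network namespace with 10.0.0.2/24, the initiator port (`cvl_0_1`) stays in the root namespace with 10.0.0.1/24, the NVMe/TCP port 4420 is opened in iptables, and connectivity is verified with ping in both directions. A dry-run sketch of those steps (not the actual `nvmf/common.sh` helper; `RUN=echo` prints each command, since the real commands need root and the physical ports from the trace):

```shell
# Hypothetical dry-run of the namespace setup seen in the trace.
# Leave RUN=echo to print the commands; set RUN= (empty) and run as
# root with real interfaces to execute them.
RUN=${RUN:-echo}

nvmf_tcp_init_sketch() {
    local target_if=$1 initiator_if=$2 ns=${1}_ns_spdk
    $RUN ip -4 addr flush "$target_if"
    $RUN ip -4 addr flush "$initiator_if"
    $RUN ip netns add "$ns"
    $RUN ip link set "$target_if" netns "$ns"            # target port lives inside the namespace
    $RUN ip addr add 10.0.0.1/24 dev "$initiator_if"     # initiator side, root namespace
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $RUN ip link set "$initiator_if" up
    $RUN ip netns exec "$ns" ip link set "$target_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    # allow the NVMe/TCP discovery/IO port through the firewall
    $RUN iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

nvmf_tcp_init_sketch cvl_0_0 cvl_0_1
```

Running the target inside a namespace is what lets a single host act as both initiator and target over real NICs, which the rest of the trace depends on (`ip netns exec cvl_0_0_ns_spdk` is prefixed to every target-side command).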
-- # modprobe nvme-tcp 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=4135103 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 4135103 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 4135103 ']' 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.809 17:29:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.809 [2024-07-12 17:29:58.443849] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:21:39.809 [2024-07-12 17:29:58.443891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.809 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.809 [2024-07-12 17:29:58.502548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:39.809 [2024-07-12 17:29:58.581629] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.809 [2024-07-12 17:29:58.581664] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.809 [2024-07-12 17:29:58.581672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.809 [2024-07-12 17:29:58.581678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.809 [2024-07-12 17:29:58.581682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:39.809 [2024-07-12 17:29:58.581777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.809 [2024-07-12 17:29:58.581861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.809 [2024-07-12 17:29:58.581863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.748 [2024-07-12 17:29:59.293071] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.748 Malloc0 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.748 [2024-07-12 17:29:59.355855] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.748 [2024-07-12 17:29:59.363783] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.748 Malloc1 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.748 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.749 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4135350 00:21:40.749 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:40.749 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:40.749 17:29:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4135350 /var/tmp/bdevperf.sock 00:21:40.749 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 4135350 ']' 00:21:40.749 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.749 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.749 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
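The target provisioning at `host/multicontroller.sh@27-41` above creates one TCP transport, two malloc bdevs, two subsystems, and a listener on both ports 4420 and 4421 for each subsystem. A dry-run sketch of that RPC sequence (the method names and arguments are taken from the trace; the loop structure is my own condensation, and `RPC=echo` would be replaced by SPDK's `scripts/rpc.py` against a live `nvmf_tgt` to actually provision it):

```shell
# Dry-run condensation of the rpc_cmd sequence from the trace.
RPC=${RPC:-echo}

provision_target() {
    $RPC nvmf_create_transport -t tcp -o -u 8192
    local i
    for i in 1 2; do
        $RPC bdev_malloc_create 64 512 -b "Malloc$((i - 1))"
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i - 1))"
        # each subsystem listens on both ports so the multipath cases can be exercised
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4421
    done
}

provision_target
```

Having two listeners per subsystem is what makes the later `bdev_nvme_attach_controller ... -s 4421` multipath attach and the `-x disable`/`-x failover` negative cases meaningful.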
00:21:40.749 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.749 17:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.687 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.687 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.688 NVMe0n1 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.688 1 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.688 request: 00:21:41.688 { 00:21:41.688 "name": "NVMe0", 00:21:41.688 "trtype": "tcp", 00:21:41.688 "traddr": "10.0.0.2", 00:21:41.688 "adrfam": "ipv4", 00:21:41.688 "trsvcid": "4420", 00:21:41.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.688 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:41.688 "hostaddr": "10.0.0.2", 00:21:41.688 "hostsvcid": "60000", 00:21:41.688 "prchk_reftag": false, 00:21:41.688 "prchk_guard": false, 00:21:41.688 "hdgst": false, 00:21:41.688 "ddgst": false, 00:21:41.688 "method": "bdev_nvme_attach_controller", 00:21:41.688 "req_id": 1 00:21:41.688 } 00:21:41.688 Got JSON-RPC error response 00:21:41.688 response: 00:21:41.688 { 00:21:41.688 "code": -114, 00:21:41.688 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:41.688 } 00:21:41.688 
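The error object above is the expected outcome: attaching a second controller under the existing name `NVMe0` is rejected with JSON-RPC code -114. A small sketch of checking that response mechanically (the code and message are copied verbatim from the trace; the `python3` extraction is my own illustration, not part of the test scripts):

```shell
# Error response captured from the trace, checked for the expected code.
response='{
  "code": -114,
  "message": "A controller named NVMe0 already exists with the specified network path\n"
}'

code=$(printf '%s' "$response" | python3 -c 'import json, sys; print(json.load(sys.stdin)["code"])')
if [ "$code" -eq -114 ]; then
    echo "duplicate-controller attach rejected as expected"
fi
```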
17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:21:41.688 request: 00:21:41.688 { 00:21:41.688 "name": "NVMe0", 00:21:41.688 "trtype": "tcp", 00:21:41.688 "traddr": "10.0.0.2", 00:21:41.688 "adrfam": "ipv4", 00:21:41.688 "trsvcid": "4420", 00:21:41.688 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:41.688 "hostaddr": "10.0.0.2", 00:21:41.688 "hostsvcid": "60000", 00:21:41.688 "prchk_reftag": false, 00:21:41.688 "prchk_guard": false, 00:21:41.688 "hdgst": false, 00:21:41.688 "ddgst": false, 00:21:41.688 "method": "bdev_nvme_attach_controller", 00:21:41.688 "req_id": 1 00:21:41.688 } 00:21:41.688 Got JSON-RPC error response 00:21:41.688 response: 00:21:41.688 { 00:21:41.688 "code": -114, 00:21:41.688 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:41.688 } 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 
-- # local arg=rpc_cmd 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.688 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.948 request: 00:21:41.948 { 00:21:41.948 "name": "NVMe0", 00:21:41.948 "trtype": "tcp", 00:21:41.948 "traddr": "10.0.0.2", 00:21:41.948 "adrfam": "ipv4", 00:21:41.948 "trsvcid": "4420", 00:21:41.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.948 "hostaddr": "10.0.0.2", 00:21:41.948 "hostsvcid": "60000", 00:21:41.948 "prchk_reftag": false, 00:21:41.948 "prchk_guard": false, 00:21:41.948 "hdgst": false, 00:21:41.948 "ddgst": false, 00:21:41.948 "multipath": "disable", 00:21:41.948 "method": "bdev_nvme_attach_controller", 00:21:41.948 "req_id": 1 00:21:41.948 } 00:21:41.948 Got JSON-RPC error response 00:21:41.948 response: 00:21:41.948 { 00:21:41.948 "code": -114, 00:21:41.948 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:41.948 } 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.948 request: 00:21:41.948 { 00:21:41.948 "name": "NVMe0", 00:21:41.948 "trtype": "tcp", 00:21:41.948 "traddr": "10.0.0.2", 00:21:41.948 "adrfam": "ipv4", 00:21:41.948 "trsvcid": "4420", 00:21:41.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.948 "hostaddr": "10.0.0.2", 00:21:41.948 
"hostsvcid": "60000", 00:21:41.948 "prchk_reftag": false, 00:21:41.948 "prchk_guard": false, 00:21:41.948 "hdgst": false, 00:21:41.948 "ddgst": false, 00:21:41.948 "multipath": "failover", 00:21:41.948 "method": "bdev_nvme_attach_controller", 00:21:41.948 "req_id": 1 00:21:41.948 } 00:21:41.948 Got JSON-RPC error response 00:21:41.948 response: 00:21:41.948 { 00:21:41.948 "code": -114, 00:21:41.948 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:41.948 } 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.948 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.948 17:30:00 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.948 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:41.948 17:30:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:43.326 0 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.326 
17:30:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 4135350 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 4135350 ']' 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 4135350 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4135350 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4135350' 00:21:43.326 killing process with pid 4135350 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 4135350 00:21:43.326 17:30:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 4135350 00:21:43.326 17:30:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:43.326 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.326 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.326 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.326 17:30:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:43.326 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.326 17:30:02 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.326 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.326 17:30:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:43.326 17:30:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:43.326 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:43.326 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:43.326 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:43.326 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:43.326 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:43.326 [2024-07-12 17:29:59.468861] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:21:43.326 [2024-07-12 17:29:59.468910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4135350 ] 00:21:43.326 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.326 [2024-07-12 17:29:59.523653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.326 [2024-07-12 17:29:59.598752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.326 [2024-07-12 17:30:00.642022] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name fd2d6ecd-995a-41a3-8917-55fc255aaf00 already exists 00:21:43.326 [2024-07-12 17:30:00.642050] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:fd2d6ecd-995a-41a3-8917-55fc255aaf00 alias for bdev NVMe1n1 00:21:43.326 [2024-07-12 17:30:00.642058] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:43.326 Running I/O for 1 seconds... 
00:21:43.326 00:21:43.326 Latency(us) 00:21:43.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.327 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:43.327 NVMe0n1 : 1.00 24816.93 96.94 0.00 0.00 5151.22 1517.30 9061.06 00:21:43.327 =================================================================================================================== 00:21:43.327 Total : 24816.93 96.94 0.00 0.00 5151.22 1517.30 9061.06 00:21:43.327 Received shutdown signal, test time was about 1.000000 seconds 00:21:43.327 00:21:43.327 Latency(us) 00:21:43.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.327 =================================================================================================================== 00:21:43.327 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:43.327 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:43.327 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:43.327 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:43.327 17:30:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:43.327 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:43.327 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:43.327 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:43.327 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:43.327 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:43.327 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:43.327 rmmod nvme_tcp 00:21:43.327 rmmod nvme_fabrics 00:21:43.327 rmmod nvme_keyring 00:21:43.586 
17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 4135103 ']' 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 4135103 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 4135103 ']' 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 4135103 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4135103 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4135103' 00:21:43.586 killing process with pid 4135103 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 4135103 00:21:43.586 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 4135103 00:21:43.845 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.845 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.845 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.845 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.845 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.845 17:30:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.845 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.845 17:30:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.741 17:30:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.741 00:21:45.741 real 0m11.558s 00:21:45.741 user 0m15.825s 00:21:45.741 sys 0m4.746s 00:21:45.741 17:30:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:45.741 17:30:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.741 ************************************ 00:21:45.741 END TEST nvmf_multicontroller 00:21:45.741 ************************************ 00:21:45.741 17:30:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:45.741 17:30:04 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:45.741 17:30:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:45.741 17:30:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:45.741 17:30:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:45.999 ************************************ 00:21:45.999 START TEST nvmf_aer 00:21:45.999 ************************************ 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:45.999 * Looking for test storage... 
00:21:45.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:45.999 17:30:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.266 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.266 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:51.266 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:51.266 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:51.266 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:51.266 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:51.266 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:51.266 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:51.266 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:51.266 17:30:10 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:51.266 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:51.266 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:51.266 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:51.267 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:51.267 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:51.267 Found net devices under 0000:86:00.0: cvl_0_0 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:51.267 Found net devices under 0000:86:00.1: cvl_0_1 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.267 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:51.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:21:51.526 00:21:51.526 --- 10.0.0.2 ping statistics --- 00:21:51.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.526 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:51.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:21:51.526 00:21:51.526 --- 10.0.0.1 ping statistics --- 00:21:51.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.526 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:51.526 17:30:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.785 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=4139851 00:21:51.785 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:51.785 17:30:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 4139851 00:21:51.785 17:30:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 4139851 ']' 00:21:51.785 17:30:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.785 17:30:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.785 17:30:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.785 17:30:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.785 17:30:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.785 [2024-07-12 17:30:10.354967] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:21:51.785 [2024-07-12 17:30:10.355010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.785 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.785 [2024-07-12 17:30:10.411783] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.785 [2024-07-12 17:30:10.491226] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:51.785 [2024-07-12 17:30:10.491265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.785 [2024-07-12 17:30:10.491272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.785 [2024-07-12 17:30:10.491278] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.785 [2024-07-12 17:30:10.491283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.785 [2024-07-12 17:30:10.491338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.785 [2024-07-12 17:30:10.491436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.785 [2024-07-12 17:30:10.491459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.785 [2024-07-12 17:30:10.491461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.775 [2024-07-12 17:30:11.199365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.775 17:30:11 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.775 Malloc0 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.775 [2024-07-12 17:30:11.251327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:52.775 [
00:21:52.775 {
00:21:52.775 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:21:52.775 "subtype": "Discovery",
00:21:52.775 "listen_addresses": [],
00:21:52.775 "allow_any_host": true,
00:21:52.775 "hosts": []
00:21:52.775 },
00:21:52.775 {
00:21:52.775 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:52.775 "subtype": "NVMe",
00:21:52.775 "listen_addresses": [
00:21:52.775 {
00:21:52.775 "trtype": "TCP",
00:21:52.775 "adrfam": "IPv4",
00:21:52.775 "traddr": "10.0.0.2",
00:21:52.775 "trsvcid": "4420"
00:21:52.775 }
00:21:52.775 ],
00:21:52.775 "allow_any_host": true,
00:21:52.775 "hosts": [],
00:21:52.775 "serial_number": "SPDK00000000000001",
00:21:52.775 "model_number": "SPDK bdev Controller",
00:21:52.775 "max_namespaces": 2,
00:21:52.775 "min_cntlid": 1,
00:21:52.775 "max_cntlid": 65519,
00:21:52.775 "namespaces": [
00:21:52.775 {
00:21:52.775 "nsid": 1,
00:21:52.775 "bdev_name": "Malloc0",
00:21:52.775 "name": "Malloc0",
00:21:52.775 "nguid": "6BC05DF6568B4BF68707D29C23F760D4",
00:21:52.775 "uuid": "6bc05df6-568b-4bf6-8707-d29c23f760d4"
00:21:52.775 }
00:21:52.775 ]
00:21:52.775 }
00:21:52.775 ]
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=4139954
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:21:52.775 17:30:11
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:52.775 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']'
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:52.775 Malloc1
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:52.775 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:52.775 Asynchronous Event Request test
00:21:52.775 Attaching to 10.0.0.2
00:21:52.775 Attached to 10.0.0.2
00:21:52.775 Registering asynchronous event callbacks...
00:21:52.775 Starting namespace attribute notice tests for all controllers...
00:21:52.775 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:21:52.775 aer_cb - Changed Namespace
00:21:52.775 Cleaning up...
00:21:53.034 [
00:21:53.034 {
00:21:53.034 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:21:53.034 "subtype": "Discovery",
00:21:53.034 "listen_addresses": [],
00:21:53.034 "allow_any_host": true,
00:21:53.034 "hosts": []
00:21:53.034 },
00:21:53.034 {
00:21:53.034 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:53.034 "subtype": "NVMe",
00:21:53.034 "listen_addresses": [
00:21:53.034 {
00:21:53.034 "trtype": "TCP",
00:21:53.034 "adrfam": "IPv4",
00:21:53.034 "traddr": "10.0.0.2",
00:21:53.034 "trsvcid": "4420"
00:21:53.034 }
00:21:53.034 ],
00:21:53.034 "allow_any_host": true,
00:21:53.034 "hosts": [],
00:21:53.034 "serial_number": "SPDK00000000000001",
00:21:53.034 "model_number": "SPDK bdev Controller",
00:21:53.034 "max_namespaces": 2,
00:21:53.034 "min_cntlid": 1,
00:21:53.034 "max_cntlid": 65519,
00:21:53.034 "namespaces": [
00:21:53.034 {
00:21:53.034 "nsid": 1,
00:21:53.034 "bdev_name": "Malloc0",
00:21:53.034 "name": "Malloc0",
00:21:53.034 "nguid": "6BC05DF6568B4BF68707D29C23F760D4",
00:21:53.034 "uuid": "6bc05df6-568b-4bf6-8707-d29c23f760d4"
00:21:53.034 },
00:21:53.034 {
00:21:53.034 "nsid": 2,
00:21:53.034 "bdev_name": "Malloc1",
00:21:53.034 "name": "Malloc1",
00:21:53.034 "nguid": "28A820AD5A8E44ADAFB2D561F83D605D",
00:21:53.034 "uuid": "28a820ad-5a8e-44ad-afb2-d561f83d605d"
00:21:53.034 }
00:21:53.034 ]
00:21:53.034 }
00:21:53.034 ]
00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 4139954
00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete
Malloc1 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:53.034 rmmod nvme_tcp 00:21:53.034 rmmod nvme_fabrics 00:21:53.034 rmmod nvme_keyring 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 4139851 ']' 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 4139851 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 4139851 ']' 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- 
# kill -0 4139851 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4139851 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4139851' 00:21:53.034 killing process with pid 4139851 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 4139851 00:21:53.034 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 4139851 00:21:53.293 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:53.293 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:53.293 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:53.293 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:53.293 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:53.293 17:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.293 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.293 17:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.825 17:30:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:55.825 00:21:55.825 real 0m9.466s 00:21:55.825 user 0m7.292s 00:21:55.825 sys 0m4.722s 00:21:55.825 17:30:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:55.825 17:30:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 
00:21:55.825 ************************************ 00:21:55.825 END TEST nvmf_aer 00:21:55.825 ************************************ 00:21:55.825 17:30:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:55.825 17:30:14 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:55.825 17:30:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:55.825 17:30:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.825 17:30:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:55.825 ************************************ 00:21:55.825 START TEST nvmf_async_init 00:21:55.825 ************************************ 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:55.825 * Looking for test storage... 00:21:55.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.825 17:30:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- 
# nguid=3388f5ea2ba74ec090f877b3e4722c84 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:55.826 17:30:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:01.094 
17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.094 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.095 17:30:19 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:01.095 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:01.095 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.095 17:30:19 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:01.095 Found net devices under 0000:86:00.0: cvl_0_0 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.095 17:30:19 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:01.095 Found net devices under 0000:86:00.1: cvl_0_1 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:01.095 
17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:01.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:01.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms
00:22:01.095
00:22:01.095 --- 10.0.0.2 ping statistics ---
00:22:01.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:01.095 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:01.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:01.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms
00:22:01.095
00:22:01.095 --- 10.0.0.1 ping statistics ---
00:22:01.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:01.095 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=4143400
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 4143400
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 4143400 ']'
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init
-- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.095 17:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:01.095 [2024-07-12 17:30:19.523604] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:22:01.095 [2024-07-12 17:30:19.523646] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.095 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.095 [2024-07-12 17:30:19.581799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.095 [2024-07-12 17:30:19.665278] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.095 [2024-07-12 17:30:19.665311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.095 [2024-07-12 17:30:19.665319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.095 [2024-07-12 17:30:19.665325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.095 [2024-07-12 17:30:19.665330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:01.095 [2024-07-12 17:30:19.665363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.662 [2024-07-12 17:30:20.348996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.662 null0 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.662 
17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3388f5ea2ba74ec090f877b3e4722c84 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.662 [2024-07-12 17:30:20.389195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.662 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.919 nvme0n1 00:22:01.919 17:30:20 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.919 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:01.919 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.919 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.919 [ 00:22:01.919 { 00:22:01.919 "name": "nvme0n1", 00:22:01.919 "aliases": [ 00:22:01.919 "3388f5ea-2ba7-4ec0-90f8-77b3e4722c84" 00:22:01.919 ], 00:22:01.919 "product_name": "NVMe disk", 00:22:01.919 "block_size": 512, 00:22:01.919 "num_blocks": 2097152, 00:22:01.919 "uuid": "3388f5ea-2ba7-4ec0-90f8-77b3e4722c84", 00:22:01.919 "assigned_rate_limits": { 00:22:01.919 "rw_ios_per_sec": 0, 00:22:01.919 "rw_mbytes_per_sec": 0, 00:22:01.919 "r_mbytes_per_sec": 0, 00:22:01.919 "w_mbytes_per_sec": 0 00:22:01.919 }, 00:22:01.919 "claimed": false, 00:22:01.919 "zoned": false, 00:22:01.919 "supported_io_types": { 00:22:01.919 "read": true, 00:22:01.919 "write": true, 00:22:01.919 "unmap": false, 00:22:01.919 "flush": true, 00:22:01.919 "reset": true, 00:22:01.919 "nvme_admin": true, 00:22:01.919 "nvme_io": true, 00:22:01.919 "nvme_io_md": false, 00:22:01.919 "write_zeroes": true, 00:22:01.919 "zcopy": false, 00:22:01.919 "get_zone_info": false, 00:22:01.919 "zone_management": false, 00:22:01.919 "zone_append": false, 00:22:01.919 "compare": true, 00:22:01.919 "compare_and_write": true, 00:22:01.919 "abort": true, 00:22:01.919 "seek_hole": false, 00:22:01.919 "seek_data": false, 00:22:01.919 "copy": true, 00:22:01.919 "nvme_iov_md": false 00:22:01.919 }, 00:22:01.919 "memory_domains": [ 00:22:01.919 { 00:22:01.919 "dma_device_id": "system", 00:22:01.919 "dma_device_type": 1 00:22:01.919 } 00:22:01.919 ], 00:22:01.919 "driver_specific": { 00:22:01.919 "nvme": [ 00:22:01.919 { 00:22:01.919 "trid": { 00:22:01.919 "trtype": "TCP", 00:22:01.919 "adrfam": "IPv4", 00:22:01.919 "traddr": "10.0.0.2", 
00:22:01.919 "trsvcid": "4420", 00:22:01.919 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:01.919 }, 00:22:01.919 "ctrlr_data": { 00:22:01.919 "cntlid": 1, 00:22:01.919 "vendor_id": "0x8086", 00:22:01.919 "model_number": "SPDK bdev Controller", 00:22:01.919 "serial_number": "00000000000000000000", 00:22:01.919 "firmware_revision": "24.09", 00:22:01.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:01.919 "oacs": { 00:22:01.919 "security": 0, 00:22:01.919 "format": 0, 00:22:01.919 "firmware": 0, 00:22:01.919 "ns_manage": 0 00:22:01.919 }, 00:22:01.919 "multi_ctrlr": true, 00:22:01.919 "ana_reporting": false 00:22:01.919 }, 00:22:01.919 "vs": { 00:22:01.919 "nvme_version": "1.3" 00:22:01.919 }, 00:22:01.919 "ns_data": { 00:22:01.919 "id": 1, 00:22:01.919 "can_share": true 00:22:01.919 } 00:22:01.919 } 00:22:01.919 ], 00:22:01.919 "mp_policy": "active_passive" 00:22:01.919 } 00:22:01.919 } 00:22:01.919 ] 00:22:01.919 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.919 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:01.919 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.919 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:01.919 [2024-07-12 17:30:20.637711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:01.919 [2024-07-12 17:30:20.637764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a19250 (9): Bad file descriptor 00:22:02.177 [2024-07-12 17:30:20.769462] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.177 [ 00:22:02.177 { 00:22:02.177 "name": "nvme0n1", 00:22:02.177 "aliases": [ 00:22:02.177 "3388f5ea-2ba7-4ec0-90f8-77b3e4722c84" 00:22:02.177 ], 00:22:02.177 "product_name": "NVMe disk", 00:22:02.177 "block_size": 512, 00:22:02.177 "num_blocks": 2097152, 00:22:02.177 "uuid": "3388f5ea-2ba7-4ec0-90f8-77b3e4722c84", 00:22:02.177 "assigned_rate_limits": { 00:22:02.177 "rw_ios_per_sec": 0, 00:22:02.177 "rw_mbytes_per_sec": 0, 00:22:02.177 "r_mbytes_per_sec": 0, 00:22:02.177 "w_mbytes_per_sec": 0 00:22:02.177 }, 00:22:02.177 "claimed": false, 00:22:02.177 "zoned": false, 00:22:02.177 "supported_io_types": { 00:22:02.177 "read": true, 00:22:02.177 "write": true, 00:22:02.177 "unmap": false, 00:22:02.177 "flush": true, 00:22:02.177 "reset": true, 00:22:02.177 "nvme_admin": true, 00:22:02.177 "nvme_io": true, 00:22:02.177 "nvme_io_md": false, 00:22:02.177 "write_zeroes": true, 00:22:02.177 "zcopy": false, 00:22:02.177 "get_zone_info": false, 00:22:02.177 "zone_management": false, 00:22:02.177 "zone_append": false, 00:22:02.177 "compare": true, 00:22:02.177 "compare_and_write": true, 00:22:02.177 "abort": true, 00:22:02.177 "seek_hole": false, 00:22:02.177 "seek_data": false, 00:22:02.177 "copy": true, 00:22:02.177 "nvme_iov_md": false 00:22:02.177 }, 00:22:02.177 "memory_domains": [ 00:22:02.177 { 00:22:02.177 "dma_device_id": "system", 00:22:02.177 "dma_device_type": 1 00:22:02.177 } 00:22:02.177 ], 00:22:02.177 "driver_specific": { 00:22:02.177 "nvme": [ 00:22:02.177 { 00:22:02.177 "trid": { 00:22:02.177 "trtype": "TCP", 00:22:02.177 "adrfam": "IPv4", 00:22:02.177 
"traddr": "10.0.0.2", 00:22:02.177 "trsvcid": "4420", 00:22:02.177 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:02.177 }, 00:22:02.177 "ctrlr_data": { 00:22:02.177 "cntlid": 2, 00:22:02.177 "vendor_id": "0x8086", 00:22:02.177 "model_number": "SPDK bdev Controller", 00:22:02.177 "serial_number": "00000000000000000000", 00:22:02.177 "firmware_revision": "24.09", 00:22:02.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.177 "oacs": { 00:22:02.177 "security": 0, 00:22:02.177 "format": 0, 00:22:02.177 "firmware": 0, 00:22:02.177 "ns_manage": 0 00:22:02.177 }, 00:22:02.177 "multi_ctrlr": true, 00:22:02.177 "ana_reporting": false 00:22:02.177 }, 00:22:02.177 "vs": { 00:22:02.177 "nvme_version": "1.3" 00:22:02.177 }, 00:22:02.177 "ns_data": { 00:22:02.177 "id": 1, 00:22:02.177 "can_share": true 00:22:02.177 } 00:22:02.177 } 00:22:02.177 ], 00:22:02.177 "mp_policy": "active_passive" 00:22:02.177 } 00:22:02.177 } 00:22:02.177 ] 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.U4sA5wo7cN 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.U4sA5wo7cN 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.177 [2024-07-12 17:30:20.818282] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.177 [2024-07-12 17:30:20.818386] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.U4sA5wo7cN 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.177 [2024-07-12 17:30:20.826300] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.U4sA5wo7cN 00:22:02.177 17:30:20 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.177 [2024-07-12 17:30:20.834337] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.177 [2024-07-12 17:30:20.834371] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:02.177 nvme0n1 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.177 [ 00:22:02.177 { 00:22:02.177 "name": "nvme0n1", 00:22:02.177 "aliases": [ 00:22:02.177 "3388f5ea-2ba7-4ec0-90f8-77b3e4722c84" 00:22:02.177 ], 00:22:02.177 "product_name": "NVMe disk", 00:22:02.177 "block_size": 512, 00:22:02.177 "num_blocks": 2097152, 00:22:02.177 "uuid": "3388f5ea-2ba7-4ec0-90f8-77b3e4722c84", 00:22:02.177 "assigned_rate_limits": { 00:22:02.177 "rw_ios_per_sec": 0, 00:22:02.177 "rw_mbytes_per_sec": 0, 00:22:02.177 "r_mbytes_per_sec": 0, 00:22:02.177 "w_mbytes_per_sec": 0 00:22:02.177 }, 00:22:02.177 "claimed": false, 00:22:02.177 "zoned": false, 00:22:02.177 "supported_io_types": { 00:22:02.177 "read": true, 00:22:02.177 "write": true, 00:22:02.177 "unmap": false, 00:22:02.177 "flush": true, 00:22:02.177 "reset": true, 00:22:02.177 "nvme_admin": true, 00:22:02.177 "nvme_io": true, 00:22:02.177 "nvme_io_md": false, 00:22:02.177 "write_zeroes": true, 00:22:02.177 "zcopy": false, 00:22:02.177 "get_zone_info": false, 00:22:02.177 "zone_management": false, 00:22:02.177 "zone_append": false, 00:22:02.177 "compare": true, 00:22:02.177 
"compare_and_write": true, 00:22:02.177 "abort": true, 00:22:02.177 "seek_hole": false, 00:22:02.177 "seek_data": false, 00:22:02.177 "copy": true, 00:22:02.177 "nvme_iov_md": false 00:22:02.177 }, 00:22:02.177 "memory_domains": [ 00:22:02.177 { 00:22:02.177 "dma_device_id": "system", 00:22:02.177 "dma_device_type": 1 00:22:02.177 } 00:22:02.177 ], 00:22:02.177 "driver_specific": { 00:22:02.177 "nvme": [ 00:22:02.177 { 00:22:02.177 "trid": { 00:22:02.177 "trtype": "TCP", 00:22:02.177 "adrfam": "IPv4", 00:22:02.177 "traddr": "10.0.0.2", 00:22:02.177 "trsvcid": "4421", 00:22:02.177 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:02.177 }, 00:22:02.177 "ctrlr_data": { 00:22:02.177 "cntlid": 3, 00:22:02.177 "vendor_id": "0x8086", 00:22:02.177 "model_number": "SPDK bdev Controller", 00:22:02.177 "serial_number": "00000000000000000000", 00:22:02.177 "firmware_revision": "24.09", 00:22:02.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.177 "oacs": { 00:22:02.177 "security": 0, 00:22:02.177 "format": 0, 00:22:02.177 "firmware": 0, 00:22:02.177 "ns_manage": 0 00:22:02.177 }, 00:22:02.177 "multi_ctrlr": true, 00:22:02.177 "ana_reporting": false 00:22:02.177 }, 00:22:02.177 "vs": { 00:22:02.177 "nvme_version": "1.3" 00:22:02.177 }, 00:22:02.177 "ns_data": { 00:22:02.177 "id": 1, 00:22:02.177 "can_share": true 00:22:02.177 } 00:22:02.177 } 00:22:02.177 ], 00:22:02.177 "mp_policy": "active_passive" 00:22:02.177 } 00:22:02.177 } 00:22:02.177 ] 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.177 17:30:20 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.U4sA5wo7cN 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:02.177 17:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:02.177 rmmod nvme_tcp 00:22:02.434 rmmod nvme_fabrics 00:22:02.434 rmmod nvme_keyring 00:22:02.434 17:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:02.434 17:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:02.434 17:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:02.434 17:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 4143400 ']' 00:22:02.434 17:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 4143400 00:22:02.434 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 4143400 ']' 00:22:02.434 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 4143400 00:22:02.434 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:02.434 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.434 17:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4143400 00:22:02.434 17:30:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:02.434 17:30:21 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:02.434 17:30:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4143400' 00:22:02.434 killing process with pid 4143400 00:22:02.434 17:30:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 4143400 00:22:02.434 [2024-07-12 17:30:21.036018] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:02.434 [2024-07-12 17:30:21.036040] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:02.434 17:30:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 4143400 00:22:02.434 17:30:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:02.434 17:30:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:02.434 17:30:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:02.434 17:30:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:02.434 17:30:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:02.434 17:30:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.434 17:30:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.434 17:30:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.961 17:30:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:04.961 00:22:04.961 real 0m9.191s 00:22:04.961 user 0m3.323s 00:22:04.961 sys 0m4.284s 00:22:04.961 17:30:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:04.961 17:30:23 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:22:04.961 ************************************ 00:22:04.961 END TEST nvmf_async_init 00:22:04.961 ************************************ 00:22:04.961 17:30:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:04.961 17:30:23 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:04.961 17:30:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:04.961 17:30:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:04.961 17:30:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:04.961 ************************************ 00:22:04.961 START TEST dma 00:22:04.961 ************************************ 00:22:04.961 17:30:23 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:04.961 * Looking for test storage... 00:22:04.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:04.961 17:30:23 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.961 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:04.961 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.961 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.961 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.961 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.961 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.961 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.961 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.961 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.961 17:30:23 nvmf_tcp.dma 
-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.961 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.961 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.962 17:30:23 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.962 17:30:23 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.962 17:30:23 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.962 17:30:23 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.962 17:30:23 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.962 17:30:23 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.962 17:30:23 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:04.962 17:30:23 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:04.962 17:30:23 nvmf_tcp.dma -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:04.962 17:30:23 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:04.962 17:30:23 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:04.962 17:30:23 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:04.962 00:22:04.962 real 0m0.106s 00:22:04.962 user 0m0.043s 00:22:04.962 sys 0m0.070s 00:22:04.962 17:30:23 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:04.962 17:30:23 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:22:04.962 ************************************ 00:22:04.962 END TEST dma 00:22:04.962 ************************************ 00:22:04.962 17:30:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:04.962 17:30:23 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:04.962 17:30:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:04.962 17:30:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:04.962 17:30:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:04.962 ************************************ 00:22:04.962 START TEST nvmf_identify 00:22:04.962 ************************************ 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:04.962 * Looking for test storage... 
00:22:04.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.962 17:30:23 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:04.962 17:30:23 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:04.962 17:30:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:10.221 17:30:28 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.221 
17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:10.221 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:10.221 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:10.221 Found net devices under 0000:86:00.0: cvl_0_0 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:10.221 Found net devices under 0000:86:00.1: cvl_0_1 00:22:10.221 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:10.222 17:30:28 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:10.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:22:10.222 00:22:10.222 --- 10.0.0.2 ping statistics --- 00:22:10.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.222 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:10.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:22:10.222 00:22:10.222 --- 10.0.0.1 ping statistics --- 00:22:10.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.222 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4147204 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4147204 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 4147204 ']' 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.222 
17:30:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:10.222 17:30:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:10.222 [2024-07-12 17:30:28.556548] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:22:10.222 [2024-07-12 17:30:28.556591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.222 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.222 [2024-07-12 17:30:28.613891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:10.222 [2024-07-12 17:30:28.694777] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.222 [2024-07-12 17:30:28.694815] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.222 [2024-07-12 17:30:28.694822] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.222 [2024-07-12 17:30:28.694830] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.222 [2024-07-12 17:30:28.694835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.222 [2024-07-12 17:30:28.694876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.222 [2024-07-12 17:30:28.694972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.222 [2024-07-12 17:30:28.695034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.222 [2024-07-12 17:30:28.695035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:10.789 [2024-07-12 17:30:29.365104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:10.789 Malloc0 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:10.789 
17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:10.789 [2024-07-12 17:30:29.449070] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:10.789 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.789 17:30:29 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:10.789 [ 00:22:10.789 { 00:22:10.789 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:10.789 "subtype": "Discovery", 00:22:10.789 "listen_addresses": [ 00:22:10.789 { 00:22:10.789 "trtype": "TCP", 00:22:10.789 "adrfam": "IPv4", 00:22:10.789 "traddr": "10.0.0.2", 00:22:10.789 "trsvcid": "4420" 00:22:10.789 } 00:22:10.789 ], 00:22:10.789 "allow_any_host": true, 00:22:10.789 "hosts": [] 00:22:10.789 }, 00:22:10.789 { 00:22:10.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.789 "subtype": "NVMe", 00:22:10.789 "listen_addresses": [ 00:22:10.789 { 00:22:10.789 "trtype": "TCP", 00:22:10.789 "adrfam": "IPv4", 00:22:10.790 "traddr": "10.0.0.2", 00:22:10.790 "trsvcid": "4420" 00:22:10.790 } 00:22:10.790 ], 00:22:10.790 "allow_any_host": true, 00:22:10.790 "hosts": [], 00:22:10.790 "serial_number": "SPDK00000000000001", 00:22:10.790 "model_number": "SPDK bdev Controller", 00:22:10.790 "max_namespaces": 32, 00:22:10.790 "min_cntlid": 1, 00:22:10.790 "max_cntlid": 65519, 00:22:10.790 "namespaces": [ 00:22:10.790 { 00:22:10.790 "nsid": 1, 00:22:10.790 "bdev_name": "Malloc0", 00:22:10.790 "name": "Malloc0", 00:22:10.790 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:10.790 "eui64": "ABCDEF0123456789", 00:22:10.790 "uuid": "9ab51626-c8e0-4fc1-90f4-0b8914552041" 00:22:10.790 } 00:22:10.790 ] 00:22:10.790 } 00:22:10.790 ] 00:22:10.790 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.790 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:10.790 [2024-07-12 17:30:29.500406] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:22:10.790 [2024-07-12 17:30:29.500437] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4147252 ] 00:22:10.790 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.790 [2024-07-12 17:30:29.529932] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:10.790 [2024-07-12 17:30:29.529979] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:10.790 [2024-07-12 17:30:29.529984] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:10.790 [2024-07-12 17:30:29.529994] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:10.790 [2024-07-12 17:30:29.530000] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:10.790 [2024-07-12 17:30:29.530363] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:10.790 [2024-07-12 17:30:29.530400] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1029ec0 0 00:22:10.790 [2024-07-12 17:30:29.544389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:10.790 [2024-07-12 17:30:29.544400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:10.790 [2024-07-12 17:30:29.544405] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:10.790 [2024-07-12 17:30:29.544408] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:10.790 [2024-07-12 17:30:29.544442] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.544449] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.544452] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1029ec0) 00:22:10.790 [2024-07-12 17:30:29.544464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:10.790 [2024-07-12 17:30:29.544479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace40, cid 0, qid 0 00:22:10.790 [2024-07-12 17:30:29.552388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:10.790 [2024-07-12 17:30:29.552397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:10.790 [2024-07-12 17:30:29.552400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.552404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ace40) on tqpair=0x1029ec0 00:22:10.790 [2024-07-12 17:30:29.552414] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:10.790 [2024-07-12 17:30:29.552420] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:10.790 [2024-07-12 17:30:29.552425] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:10.790 [2024-07-12 17:30:29.552441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.552445] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.552448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1029ec0) 00:22:10.790 [2024-07-12 17:30:29.552455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.790 [2024-07-12 17:30:29.552466] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x10ace40, cid 0, qid 0 00:22:10.790 [2024-07-12 17:30:29.552562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:10.790 [2024-07-12 17:30:29.552568] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:10.790 [2024-07-12 17:30:29.552571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.552574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ace40) on tqpair=0x1029ec0 00:22:10.790 [2024-07-12 17:30:29.552579] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:10.790 [2024-07-12 17:30:29.552585] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:10.790 [2024-07-12 17:30:29.552591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.552595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.552598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1029ec0) 00:22:10.790 [2024-07-12 17:30:29.552604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.790 [2024-07-12 17:30:29.552614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace40, cid 0, qid 0 00:22:10.790 [2024-07-12 17:30:29.552684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:10.790 [2024-07-12 17:30:29.552689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:10.790 [2024-07-12 17:30:29.552693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.552696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ace40) on tqpair=0x1029ec0 00:22:10.790 [2024-07-12 
17:30:29.552700] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:10.790 [2024-07-12 17:30:29.552706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:10.790 [2024-07-12 17:30:29.552712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.552716] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.552719] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1029ec0) 00:22:10.790 [2024-07-12 17:30:29.552724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.790 [2024-07-12 17:30:29.552733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace40, cid 0, qid 0 00:22:10.790 [2024-07-12 17:30:29.552800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:10.790 [2024-07-12 17:30:29.552806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:10.790 [2024-07-12 17:30:29.552809] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.552812] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ace40) on tqpair=0x1029ec0 00:22:10.790 [2024-07-12 17:30:29.552817] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:10.790 [2024-07-12 17:30:29.552824] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.552828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.552831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1029ec0) 
00:22:10.790 [2024-07-12 17:30:29.552838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.790 [2024-07-12 17:30:29.552848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace40, cid 0, qid 0 00:22:10.790 [2024-07-12 17:30:29.552919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:10.790 [2024-07-12 17:30:29.552925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:10.790 [2024-07-12 17:30:29.552928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.552931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ace40) on tqpair=0x1029ec0 00:22:10.790 [2024-07-12 17:30:29.552935] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:10.790 [2024-07-12 17:30:29.552939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:10.790 [2024-07-12 17:30:29.552945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:10.790 [2024-07-12 17:30:29.553050] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:10.790 [2024-07-12 17:30:29.553054] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:10.790 [2024-07-12 17:30:29.553062] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.553065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.553068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x1029ec0) 00:22:10.790 [2024-07-12 17:30:29.553074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.790 [2024-07-12 17:30:29.553083] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace40, cid 0, qid 0 00:22:10.790 [2024-07-12 17:30:29.553152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:10.790 [2024-07-12 17:30:29.553158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:10.790 [2024-07-12 17:30:29.553161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.553164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ace40) on tqpair=0x1029ec0 00:22:10.790 [2024-07-12 17:30:29.553168] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:10.790 [2024-07-12 17:30:29.553175] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.553179] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.790 [2024-07-12 17:30:29.553182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1029ec0) 00:22:10.790 [2024-07-12 17:30:29.553187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.790 [2024-07-12 17:30:29.553196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace40, cid 0, qid 0 00:22:10.790 [2024-07-12 17:30:29.553270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:10.790 [2024-07-12 17:30:29.553276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:10.790 [2024-07-12 17:30:29.553279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:10.790 
[2024-07-12 17:30:29.553282] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ace40) on tqpair=0x1029ec0 00:22:10.790 [2024-07-12 17:30:29.553286] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:10.790 [2024-07-12 17:30:29.553290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:10.790 [2024-07-12 17:30:29.553298] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:10.791 [2024-07-12 17:30:29.553305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:10.791 [2024-07-12 17:30:29.553314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1029ec0) 00:22:10.791 [2024-07-12 17:30:29.553323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.791 [2024-07-12 17:30:29.553332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace40, cid 0, qid 0 00:22:10.791 [2024-07-12 17:30:29.553432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:10.791 [2024-07-12 17:30:29.553438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:10.791 [2024-07-12 17:30:29.553441] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553445] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1029ec0): datao=0, datal=4096, cccid=0 00:22:10.791 [2024-07-12 17:30:29.553449] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ace40) on tqpair(0x1029ec0): expected_datao=0, payload_size=4096 00:22:10.791 [2024-07-12 17:30:29.553453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553476] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553480] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:10.791 [2024-07-12 17:30:29.553525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:10.791 [2024-07-12 17:30:29.553527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ace40) on tqpair=0x1029ec0 00:22:10.791 [2024-07-12 17:30:29.553538] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:10.791 [2024-07-12 17:30:29.553545] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:10.791 [2024-07-12 17:30:29.553549] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:10.791 [2024-07-12 17:30:29.553553] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:10.791 [2024-07-12 17:30:29.553558] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:10.791 [2024-07-12 17:30:29.553562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:10.791 [2024-07-12 17:30:29.553570] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:10.791 [2024-07-12 17:30:29.553577] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1029ec0) 00:22:10.791 [2024-07-12 17:30:29.553590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:10.791 [2024-07-12 17:30:29.553600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace40, cid 0, qid 0 00:22:10.791 [2024-07-12 17:30:29.553674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:10.791 [2024-07-12 17:30:29.553679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:10.791 [2024-07-12 17:30:29.553682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ace40) on tqpair=0x1029ec0 00:22:10.791 [2024-07-12 17:30:29.553694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1029ec0) 00:22:10.791 [2024-07-12 17:30:29.553705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.791 [2024-07-12 17:30:29.553711] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553714] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.791 [2024-07-12 
17:30:29.553717] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1029ec0) 00:22:10.791 [2024-07-12 17:30:29.553722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.791 [2024-07-12 17:30:29.553726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553733] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1029ec0) 00:22:10.791 [2024-07-12 17:30:29.553737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.791 [2024-07-12 17:30:29.553742] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1029ec0) 00:22:10.791 [2024-07-12 17:30:29.553753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.791 [2024-07-12 17:30:29.553757] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:10.791 [2024-07-12 17:30:29.553767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:10.791 [2024-07-12 17:30:29.553773] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1029ec0) 00:22:10.791 [2024-07-12 
17:30:29.553782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.791 [2024-07-12 17:30:29.553792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace40, cid 0, qid 0 00:22:10.791 [2024-07-12 17:30:29.553797] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10acfc0, cid 1, qid 0 00:22:10.791 [2024-07-12 17:30:29.553801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad140, cid 2, qid 0 00:22:10.791 [2024-07-12 17:30:29.553805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad2c0, cid 3, qid 0 00:22:10.791 [2024-07-12 17:30:29.553809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad440, cid 4, qid 0 00:22:10.791 [2024-07-12 17:30:29.553919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:10.791 [2024-07-12 17:30:29.553924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:10.791 [2024-07-12 17:30:29.553927] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad440) on tqpair=0x1029ec0 00:22:10.791 [2024-07-12 17:30:29.553935] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:10.791 [2024-07-12 17:30:29.553939] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:10.791 [2024-07-12 17:30:29.553950] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.553954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1029ec0) 00:22:10.791 [2024-07-12 17:30:29.553959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.791 [2024-07-12 17:30:29.553969] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad440, cid 4, qid 0 00:22:10.791 [2024-07-12 17:30:29.554048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:10.791 [2024-07-12 17:30:29.554053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:10.791 [2024-07-12 17:30:29.554056] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.554059] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1029ec0): datao=0, datal=4096, cccid=4 00:22:10.791 [2024-07-12 17:30:29.554063] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ad440) on tqpair(0x1029ec0): expected_datao=0, payload_size=4096 00:22:10.791 [2024-07-12 17:30:29.554067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.554072] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.554075] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.554091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:10.791 [2024-07-12 17:30:29.554096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:10.791 [2024-07-12 17:30:29.554099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.554102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad440) on tqpair=0x1029ec0 00:22:10.791 [2024-07-12 17:30:29.554112] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:10.791 [2024-07-12 17:30:29.554132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.554136] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1029ec0) 00:22:10.791 [2024-07-12 17:30:29.554142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.791 [2024-07-12 17:30:29.554147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.554151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:10.791 [2024-07-12 17:30:29.554154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1029ec0) 00:22:10.791 [2024-07-12 17:30:29.554159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.791 [2024-07-12 17:30:29.554172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad440, cid 4, qid 0 00:22:10.792 [2024-07-12 17:30:29.554176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad5c0, cid 5, qid 0 00:22:10.792 [2024-07-12 17:30:29.554273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:10.792 [2024-07-12 17:30:29.554279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:10.792 [2024-07-12 17:30:29.554282] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:10.792 [2024-07-12 17:30:29.554285] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1029ec0): datao=0, datal=1024, cccid=4 00:22:10.792 [2024-07-12 17:30:29.554288] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ad440) on tqpair(0x1029ec0): expected_datao=0, payload_size=1024 00:22:10.792 [2024-07-12 17:30:29.554292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:10.792 [2024-07-12 17:30:29.554297] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:10.792 [2024-07-12 17:30:29.554300] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:10.792 [2024-07-12 17:30:29.554305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:10.792 [2024-07-12 17:30:29.554312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:10.792 [2024-07-12 17:30:29.554315] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:10.792 [2024-07-12 17:30:29.554318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad5c0) on tqpair=0x1029ec0 00:22:11.055 [2024-07-12 17:30:29.594539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.055 [2024-07-12 17:30:29.594551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.055 [2024-07-12 17:30:29.594554] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.055 [2024-07-12 17:30:29.594558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad440) on tqpair=0x1029ec0 00:22:11.055 [2024-07-12 17:30:29.594576] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.055 [2024-07-12 17:30:29.594580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1029ec0) 00:22:11.055 [2024-07-12 17:30:29.594588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.055 [2024-07-12 17:30:29.594603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad440, cid 4, qid 0 00:22:11.055 [2024-07-12 17:30:29.594720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.055 [2024-07-12 17:30:29.594726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.055 [2024-07-12 17:30:29.594729] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.055 [2024-07-12 17:30:29.594732] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info 
on tqpair(0x1029ec0): datao=0, datal=3072, cccid=4 00:22:11.055 [2024-07-12 17:30:29.594736] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ad440) on tqpair(0x1029ec0): expected_datao=0, payload_size=3072 00:22:11.055 [2024-07-12 17:30:29.594739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.055 [2024-07-12 17:30:29.594755] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.055 [2024-07-12 17:30:29.594759] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.055 [2024-07-12 17:30:29.594799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.055 [2024-07-12 17:30:29.594805] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.055 [2024-07-12 17:30:29.594807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.055 [2024-07-12 17:30:29.594810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad440) on tqpair=0x1029ec0 00:22:11.055 [2024-07-12 17:30:29.594817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.055 [2024-07-12 17:30:29.594821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1029ec0) 00:22:11.055 [2024-07-12 17:30:29.594826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.055 [2024-07-12 17:30:29.594839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad440, cid 4, qid 0 00:22:11.055 [2024-07-12 17:30:29.594916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.055 [2024-07-12 17:30:29.594921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.055 [2024-07-12 17:30:29.594924] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.055 [2024-07-12 17:30:29.594927] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x1029ec0): datao=0, datal=8, cccid=4 00:22:11.055 [2024-07-12 17:30:29.594930] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ad440) on tqpair(0x1029ec0): expected_datao=0, payload_size=8 00:22:11.055 [2024-07-12 17:30:29.594934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.055 [2024-07-12 17:30:29.594939] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.055 [2024-07-12 17:30:29.594942] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.055 [2024-07-12 17:30:29.635678] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.055 [2024-07-12 17:30:29.635689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.055 [2024-07-12 17:30:29.635694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.055 [2024-07-12 17:30:29.635698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad440) on tqpair=0x1029ec0 00:22:11.055 ===================================================== 00:22:11.055 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:11.055 ===================================================== 00:22:11.055 Controller Capabilities/Features 00:22:11.055 ================================ 00:22:11.055 Vendor ID: 0000 00:22:11.055 Subsystem Vendor ID: 0000 00:22:11.055 Serial Number: .................... 00:22:11.055 Model Number: ........................................ 
00:22:11.055 Firmware Version: 24.09 00:22:11.055 Recommended Arb Burst: 0 00:22:11.055 IEEE OUI Identifier: 00 00 00 00:22:11.055 Multi-path I/O 00:22:11.055 May have multiple subsystem ports: No 00:22:11.055 May have multiple controllers: No 00:22:11.055 Associated with SR-IOV VF: No 00:22:11.055 Max Data Transfer Size: 131072 00:22:11.055 Max Number of Namespaces: 0 00:22:11.055 Max Number of I/O Queues: 1024 00:22:11.055 NVMe Specification Version (VS): 1.3 00:22:11.055 NVMe Specification Version (Identify): 1.3 00:22:11.055 Maximum Queue Entries: 128 00:22:11.055 Contiguous Queues Required: Yes 00:22:11.055 Arbitration Mechanisms Supported 00:22:11.055 Weighted Round Robin: Not Supported 00:22:11.055 Vendor Specific: Not Supported 00:22:11.055 Reset Timeout: 15000 ms 00:22:11.055 Doorbell Stride: 4 bytes 00:22:11.055 NVM Subsystem Reset: Not Supported 00:22:11.055 Command Sets Supported 00:22:11.055 NVM Command Set: Supported 00:22:11.055 Boot Partition: Not Supported 00:22:11.055 Memory Page Size Minimum: 4096 bytes 00:22:11.055 Memory Page Size Maximum: 4096 bytes 00:22:11.055 Persistent Memory Region: Not Supported 00:22:11.055 Optional Asynchronous Events Supported 00:22:11.055 Namespace Attribute Notices: Not Supported 00:22:11.055 Firmware Activation Notices: Not Supported 00:22:11.055 ANA Change Notices: Not Supported 00:22:11.055 PLE Aggregate Log Change Notices: Not Supported 00:22:11.055 LBA Status Info Alert Notices: Not Supported 00:22:11.055 EGE Aggregate Log Change Notices: Not Supported 00:22:11.055 Normal NVM Subsystem Shutdown event: Not Supported 00:22:11.055 Zone Descriptor Change Notices: Not Supported 00:22:11.055 Discovery Log Change Notices: Supported 00:22:11.055 Controller Attributes 00:22:11.055 128-bit Host Identifier: Not Supported 00:22:11.055 Non-Operational Permissive Mode: Not Supported 00:22:11.055 NVM Sets: Not Supported 00:22:11.055 Read Recovery Levels: Not Supported 00:22:11.055 Endurance Groups: Not Supported 00:22:11.055 
Predictable Latency Mode: Not Supported 00:22:11.055 Traffic Based Keep ALive: Not Supported 00:22:11.055 Namespace Granularity: Not Supported 00:22:11.055 SQ Associations: Not Supported 00:22:11.055 UUID List: Not Supported 00:22:11.055 Multi-Domain Subsystem: Not Supported 00:22:11.055 Fixed Capacity Management: Not Supported 00:22:11.055 Variable Capacity Management: Not Supported 00:22:11.055 Delete Endurance Group: Not Supported 00:22:11.055 Delete NVM Set: Not Supported 00:22:11.055 Extended LBA Formats Supported: Not Supported 00:22:11.055 Flexible Data Placement Supported: Not Supported 00:22:11.055 00:22:11.055 Controller Memory Buffer Support 00:22:11.055 ================================ 00:22:11.055 Supported: No 00:22:11.055 00:22:11.055 Persistent Memory Region Support 00:22:11.055 ================================ 00:22:11.055 Supported: No 00:22:11.055 00:22:11.055 Admin Command Set Attributes 00:22:11.055 ============================ 00:22:11.055 Security Send/Receive: Not Supported 00:22:11.055 Format NVM: Not Supported 00:22:11.055 Firmware Activate/Download: Not Supported 00:22:11.055 Namespace Management: Not Supported 00:22:11.055 Device Self-Test: Not Supported 00:22:11.055 Directives: Not Supported 00:22:11.055 NVMe-MI: Not Supported 00:22:11.055 Virtualization Management: Not Supported 00:22:11.055 Doorbell Buffer Config: Not Supported 00:22:11.055 Get LBA Status Capability: Not Supported 00:22:11.055 Command & Feature Lockdown Capability: Not Supported 00:22:11.056 Abort Command Limit: 1 00:22:11.056 Async Event Request Limit: 4 00:22:11.056 Number of Firmware Slots: N/A 00:22:11.056 Firmware Slot 1 Read-Only: N/A 00:22:11.056 Firmware Activation Without Reset: N/A 00:22:11.056 Multiple Update Detection Support: N/A 00:22:11.056 Firmware Update Granularity: No Information Provided 00:22:11.056 Per-Namespace SMART Log: No 00:22:11.056 Asymmetric Namespace Access Log Page: Not Supported 00:22:11.056 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:11.056 Command Effects Log Page: Not Supported 00:22:11.056 Get Log Page Extended Data: Supported 00:22:11.056 Telemetry Log Pages: Not Supported 00:22:11.056 Persistent Event Log Pages: Not Supported 00:22:11.056 Supported Log Pages Log Page: May Support 00:22:11.056 Commands Supported & Effects Log Page: Not Supported 00:22:11.056 Feature Identifiers & Effects Log Page:May Support 00:22:11.056 NVMe-MI Commands & Effects Log Page: May Support 00:22:11.056 Data Area 4 for Telemetry Log: Not Supported 00:22:11.056 Error Log Page Entries Supported: 128 00:22:11.056 Keep Alive: Not Supported 00:22:11.056 00:22:11.056 NVM Command Set Attributes 00:22:11.056 ========================== 00:22:11.056 Submission Queue Entry Size 00:22:11.056 Max: 1 00:22:11.056 Min: 1 00:22:11.056 Completion Queue Entry Size 00:22:11.056 Max: 1 00:22:11.056 Min: 1 00:22:11.056 Number of Namespaces: 0 00:22:11.056 Compare Command: Not Supported 00:22:11.056 Write Uncorrectable Command: Not Supported 00:22:11.056 Dataset Management Command: Not Supported 00:22:11.056 Write Zeroes Command: Not Supported 00:22:11.056 Set Features Save Field: Not Supported 00:22:11.056 Reservations: Not Supported 00:22:11.056 Timestamp: Not Supported 00:22:11.056 Copy: Not Supported 00:22:11.056 Volatile Write Cache: Not Present 00:22:11.056 Atomic Write Unit (Normal): 1 00:22:11.056 Atomic Write Unit (PFail): 1 00:22:11.056 Atomic Compare & Write Unit: 1 00:22:11.056 Fused Compare & Write: Supported 00:22:11.056 Scatter-Gather List 00:22:11.056 SGL Command Set: Supported 00:22:11.056 SGL Keyed: Supported 00:22:11.056 SGL Bit Bucket Descriptor: Not Supported 00:22:11.056 SGL Metadata Pointer: Not Supported 00:22:11.056 Oversized SGL: Not Supported 00:22:11.056 SGL Metadata Address: Not Supported 00:22:11.056 SGL Offset: Supported 00:22:11.056 Transport SGL Data Block: Not Supported 00:22:11.056 Replay Protected Memory Block: Not Supported 00:22:11.056 00:22:11.056 
Firmware Slot Information 00:22:11.056 ========================= 00:22:11.056 Active slot: 0 00:22:11.056 00:22:11.056 00:22:11.056 Error Log 00:22:11.056 ========= 00:22:11.056 00:22:11.056 Active Namespaces 00:22:11.056 ================= 00:22:11.056 Discovery Log Page 00:22:11.056 ================== 00:22:11.056 Generation Counter: 2 00:22:11.056 Number of Records: 2 00:22:11.056 Record Format: 0 00:22:11.056 00:22:11.056 Discovery Log Entry 0 00:22:11.056 ---------------------- 00:22:11.056 Transport Type: 3 (TCP) 00:22:11.056 Address Family: 1 (IPv4) 00:22:11.056 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:11.056 Entry Flags: 00:22:11.056 Duplicate Returned Information: 1 00:22:11.056 Explicit Persistent Connection Support for Discovery: 1 00:22:11.056 Transport Requirements: 00:22:11.056 Secure Channel: Not Required 00:22:11.056 Port ID: 0 (0x0000) 00:22:11.056 Controller ID: 65535 (0xffff) 00:22:11.056 Admin Max SQ Size: 128 00:22:11.056 Transport Service Identifier: 4420 00:22:11.056 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:11.056 Transport Address: 10.0.0.2 00:22:11.056 Discovery Log Entry 1 00:22:11.056 ---------------------- 00:22:11.056 Transport Type: 3 (TCP) 00:22:11.056 Address Family: 1 (IPv4) 00:22:11.056 Subsystem Type: 2 (NVM Subsystem) 00:22:11.056 Entry Flags: 00:22:11.056 Duplicate Returned Information: 0 00:22:11.056 Explicit Persistent Connection Support for Discovery: 0 00:22:11.056 Transport Requirements: 00:22:11.056 Secure Channel: Not Required 00:22:11.056 Port ID: 0 (0x0000) 00:22:11.056 Controller ID: 65535 (0xffff) 00:22:11.056 Admin Max SQ Size: 128 00:22:11.056 Transport Service Identifier: 4420 00:22:11.056 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:11.056 Transport Address: 10.0.0.2 [2024-07-12 17:30:29.635773] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:11.056 [2024-07-12 17:30:29.635784] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ace40) on tqpair=0x1029ec0 00:22:11.056 [2024-07-12 17:30:29.635790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.056 [2024-07-12 17:30:29.635795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10acfc0) on tqpair=0x1029ec0 00:22:11.056 [2024-07-12 17:30:29.635799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.056 [2024-07-12 17:30:29.635802] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad140) on tqpair=0x1029ec0 00:22:11.056 [2024-07-12 17:30:29.635806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.056 [2024-07-12 17:30:29.635811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad2c0) on tqpair=0x1029ec0 00:22:11.056 [2024-07-12 17:30:29.635814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.056 [2024-07-12 17:30:29.635824] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.635827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.635830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1029ec0) 00:22:11.056 [2024-07-12 17:30:29.635837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.056 [2024-07-12 17:30:29.635850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad2c0, cid 3, qid 0 00:22:11.056 [2024-07-12 17:30:29.635922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.056 [2024-07-12 17:30:29.635927] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.056 [2024-07-12 17:30:29.635930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.635934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad2c0) on tqpair=0x1029ec0 00:22:11.056 [2024-07-12 17:30:29.635940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.635943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.635946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1029ec0) 00:22:11.056 [2024-07-12 17:30:29.635952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.056 [2024-07-12 17:30:29.635964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad2c0, cid 3, qid 0 00:22:11.056 [2024-07-12 17:30:29.636071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.056 [2024-07-12 17:30:29.636077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.056 [2024-07-12 17:30:29.636080] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.636083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad2c0) on tqpair=0x1029ec0 00:22:11.056 [2024-07-12 17:30:29.636087] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:11.056 [2024-07-12 17:30:29.636091] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:11.056 [2024-07-12 17:30:29.636100] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.636103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.056 [2024-07-12 
17:30:29.636106] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1029ec0) 00:22:11.056 [2024-07-12 17:30:29.636114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.056 [2024-07-12 17:30:29.636123] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad2c0, cid 3, qid 0 00:22:11.056 [2024-07-12 17:30:29.636192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.056 [2024-07-12 17:30:29.636197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.056 [2024-07-12 17:30:29.636200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.636204] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad2c0) on tqpair=0x1029ec0 00:22:11.056 [2024-07-12 17:30:29.636212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.636215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.636218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1029ec0) 00:22:11.056 [2024-07-12 17:30:29.636224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.056 [2024-07-12 17:30:29.636232] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad2c0, cid 3, qid 0 00:22:11.056 [2024-07-12 17:30:29.636324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.056 [2024-07-12 17:30:29.636329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.056 [2024-07-12 17:30:29.636332] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.636335] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad2c0) on tqpair=0x1029ec0 
00:22:11.056 [2024-07-12 17:30:29.636343] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.636346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.636349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1029ec0) 00:22:11.056 [2024-07-12 17:30:29.636355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.056 [2024-07-12 17:30:29.636364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad2c0, cid 3, qid 0 00:22:11.056 [2024-07-12 17:30:29.640385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.056 [2024-07-12 17:30:29.640393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.056 [2024-07-12 17:30:29.640396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.640399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad2c0) on tqpair=0x1029ec0 00:22:11.056 [2024-07-12 17:30:29.640409] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.640412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.640415] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1029ec0) 00:22:11.056 [2024-07-12 17:30:29.640421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.056 [2024-07-12 17:30:29.640432] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad2c0, cid 3, qid 0 00:22:11.056 [2024-07-12 17:30:29.640622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.056 [2024-07-12 17:30:29.640628] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.056 
[2024-07-12 17:30:29.640631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.056 [2024-07-12 17:30:29.640634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10ad2c0) on tqpair=0x1029ec0 00:22:11.056 [2024-07-12 17:30:29.640640] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:22:11.056 00:22:11.056 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:11.056 [2024-07-12 17:30:29.676963] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:22:11.056 [2024-07-12 17:30:29.677000] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4147350 ] 00:22:11.056 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.056 [2024-07-12 17:30:29.706613] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:11.056 [2024-07-12 17:30:29.706655] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:11.056 [2024-07-12 17:30:29.706659] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:11.056 [2024-07-12 17:30:29.706674] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:11.056 [2024-07-12 17:30:29.706679] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:11.056 [2024-07-12 17:30:29.706875] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 
00:22:11.056 [2024-07-12 17:30:29.706898] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4a2ec0 0 00:22:11.056 [2024-07-12 17:30:29.721388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:11.056 [2024-07-12 17:30:29.721400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:11.057 [2024-07-12 17:30:29.721404] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:11.057 [2024-07-12 17:30:29.721407] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:11.057 [2024-07-12 17:30:29.721435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.721439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.721443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.721453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:11.057 [2024-07-12 17:30:29.721468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x525e40, cid 0, qid 0 00:22:11.057 [2024-07-12 17:30:29.729390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.057 [2024-07-12 17:30:29.729400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.057 [2024-07-12 17:30:29.729404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.729407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x525e40) on tqpair=0x4a2ec0 00:22:11.057 [2024-07-12 17:30:29.729416] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:11.057 [2024-07-12 17:30:29.729421] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:11.057 [2024-07-12 
17:30:29.729426] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:11.057 [2024-07-12 17:30:29.729438] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.729442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.729445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.729451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.057 [2024-07-12 17:30:29.729464] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x525e40, cid 0, qid 0 00:22:11.057 [2024-07-12 17:30:29.729559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.057 [2024-07-12 17:30:29.729565] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.057 [2024-07-12 17:30:29.729571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.729574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x525e40) on tqpair=0x4a2ec0 00:22:11.057 [2024-07-12 17:30:29.729578] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:11.057 [2024-07-12 17:30:29.729585] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:11.057 [2024-07-12 17:30:29.729591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.729595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.729598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.729604] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.057 [2024-07-12 17:30:29.729614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x525e40, cid 0, qid 0 00:22:11.057 [2024-07-12 17:30:29.729707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.057 [2024-07-12 17:30:29.729713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.057 [2024-07-12 17:30:29.729716] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.729719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x525e40) on tqpair=0x4a2ec0 00:22:11.057 [2024-07-12 17:30:29.729724] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:11.057 [2024-07-12 17:30:29.729730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:11.057 [2024-07-12 17:30:29.729736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.729740] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.729743] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.729748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.057 [2024-07-12 17:30:29.729757] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x525e40, cid 0, qid 0 00:22:11.057 [2024-07-12 17:30:29.729859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.057 [2024-07-12 17:30:29.729865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.057 [2024-07-12 17:30:29.729868] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.729871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x525e40) on tqpair=0x4a2ec0 00:22:11.057 [2024-07-12 17:30:29.729875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:11.057 [2024-07-12 17:30:29.729883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.729886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.729889] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.729895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.057 [2024-07-12 17:30:29.729905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x525e40, cid 0, qid 0 00:22:11.057 [2024-07-12 17:30:29.730010] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.057 [2024-07-12 17:30:29.730015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.057 [2024-07-12 17:30:29.730018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x525e40) on tqpair=0x4a2ec0 00:22:11.057 [2024-07-12 17:30:29.730025] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:11.057 [2024-07-12 17:30:29.730031] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:11.057 [2024-07-12 17:30:29.730038] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable 
controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:11.057 [2024-07-12 17:30:29.730143] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:11.057 [2024-07-12 17:30:29.730146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:11.057 [2024-07-12 17:30:29.730152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.730164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.057 [2024-07-12 17:30:29.730174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x525e40, cid 0, qid 0 00:22:11.057 [2024-07-12 17:30:29.730241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.057 [2024-07-12 17:30:29.730247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.057 [2024-07-12 17:30:29.730249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x525e40) on tqpair=0x4a2ec0 00:22:11.057 [2024-07-12 17:30:29.730257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:11.057 [2024-07-12 17:30:29.730265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730271] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.730277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.057 [2024-07-12 17:30:29.730286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x525e40, cid 0, qid 0 00:22:11.057 [2024-07-12 17:30:29.730414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.057 [2024-07-12 17:30:29.730420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.057 [2024-07-12 17:30:29.730423] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x525e40) on tqpair=0x4a2ec0 00:22:11.057 [2024-07-12 17:30:29.730430] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:11.057 [2024-07-12 17:30:29.730434] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:11.057 [2024-07-12 17:30:29.730442] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:11.057 [2024-07-12 17:30:29.730448] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:11.057 [2024-07-12 17:30:29.730456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730459] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.730465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.057 
[2024-07-12 17:30:29.730475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x525e40, cid 0, qid 0 00:22:11.057 [2024-07-12 17:30:29.730584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.057 [2024-07-12 17:30:29.730590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.057 [2024-07-12 17:30:29.730593] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730596] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4a2ec0): datao=0, datal=4096, cccid=0 00:22:11.057 [2024-07-12 17:30:29.730600] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x525e40) on tqpair(0x4a2ec0): expected_datao=0, payload_size=4096 00:22:11.057 [2024-07-12 17:30:29.730604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730610] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730614] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730655] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.057 [2024-07-12 17:30:29.730661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.057 [2024-07-12 17:30:29.730664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x525e40) on tqpair=0x4a2ec0 00:22:11.057 [2024-07-12 17:30:29.730673] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:11.057 [2024-07-12 17:30:29.730679] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:11.057 [2024-07-12 17:30:29.730684] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 
00:22:11.057 [2024-07-12 17:30:29.730687] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:11.057 [2024-07-12 17:30:29.730690] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:11.057 [2024-07-12 17:30:29.730695] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:11.057 [2024-07-12 17:30:29.730702] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:11.057 [2024-07-12 17:30:29.730708] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.730720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.057 [2024-07-12 17:30:29.730730] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x525e40, cid 0, qid 0 00:22:11.057 [2024-07-12 17:30:29.730808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.057 [2024-07-12 17:30:29.730813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.057 [2024-07-12 17:30:29.730816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x525e40) on tqpair=0x4a2ec0 00:22:11.057 [2024-07-12 17:30:29.730825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:11.057 [2024-07-12 17:30:29.730831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.730837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.057 [2024-07-12 17:30:29.730842] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730848] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.730854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.057 [2024-07-12 17:30:29.730859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.730870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.057 [2024-07-12 17:30:29.730875] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.730886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.057 [2024-07-12 17:30:29.730890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:11.057 [2024-07-12 17:30:29.730900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:11.057 [2024-07-12 17:30:29.730905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.057 [2024-07-12 17:30:29.730909] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4a2ec0) 00:22:11.057 [2024-07-12 17:30:29.730914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.058 [2024-07-12 17:30:29.730925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x525e40, cid 0, qid 0 00:22:11.058 [2024-07-12 17:30:29.730929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x525fc0, cid 1, qid 0 00:22:11.058 [2024-07-12 17:30:29.730933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x526140, cid 2, qid 0 00:22:11.058 [2024-07-12 17:30:29.730937] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.058 [2024-07-12 17:30:29.730941] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x526440, cid 4, qid 0 00:22:11.058 [2024-07-12 17:30:29.731063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.058 [2024-07-12 17:30:29.731069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.058 [2024-07-12 17:30:29.731072] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x526440) on tqpair=0x4a2ec0 00:22:11.058 [2024-07-12 17:30:29.731079] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 
5000000 us 00:22:11.058 [2024-07-12 17:30:29.731083] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731090] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4a2ec0) 00:22:11.058 [2024-07-12 17:30:29.731113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.058 [2024-07-12 17:30:29.731122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x526440, cid 4, qid 0 00:22:11.058 [2024-07-12 17:30:29.731190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.058 [2024-07-12 17:30:29.731196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.058 [2024-07-12 17:30:29.731199] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x526440) on tqpair=0x4a2ec0 00:22:11.058 [2024-07-12 17:30:29.731252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to wait for identify active ns (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4a2ec0) 00:22:11.058 [2024-07-12 17:30:29.731277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.058 [2024-07-12 17:30:29.731286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x526440, cid 4, qid 0 00:22:11.058 [2024-07-12 17:30:29.731385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.058 [2024-07-12 17:30:29.731392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.058 [2024-07-12 17:30:29.731395] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731398] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4a2ec0): datao=0, datal=4096, cccid=4 00:22:11.058 [2024-07-12 17:30:29.731402] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x526440) on tqpair(0x4a2ec0): expected_datao=0, payload_size=4096 00:22:11.058 [2024-07-12 17:30:29.731406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731411] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731414] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.058 [2024-07-12 17:30:29.731444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.058 [2024-07-12 17:30:29.731447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731451] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x526440) on tqpair=0x4a2ec0 00:22:11.058 [2024-07-12 17:30:29.731460] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:11.058 [2024-07-12 17:30:29.731471] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731480] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731489] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4a2ec0) 00:22:11.058 [2024-07-12 17:30:29.731495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.058 [2024-07-12 17:30:29.731506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x526440, cid 4, qid 0 00:22:11.058 [2024-07-12 17:30:29.731606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.058 [2024-07-12 17:30:29.731611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.058 [2024-07-12 17:30:29.731614] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731617] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4a2ec0): datao=0, datal=4096, cccid=4 00:22:11.058 [2024-07-12 17:30:29.731621] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x526440) on tqpair(0x4a2ec0): expected_datao=0, payload_size=4096 00:22:11.058 [2024-07-12 17:30:29.731626] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731663] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731667] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.058 [2024-07-12 17:30:29.731712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.058 [2024-07-12 17:30:29.731715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x526440) on tqpair=0x4a2ec0 00:22:11.058 [2024-07-12 17:30:29.731730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731738] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731747] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4a2ec0) 00:22:11.058 [2024-07-12 17:30:29.731752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.058 [2024-07-12 17:30:29.731762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x526440, cid 4, qid 0 00:22:11.058 [2024-07-12 17:30:29.731851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.058 [2024-07-12 17:30:29.731856] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.058 [2024-07-12 17:30:29.731859] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731862] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4a2ec0): datao=0, datal=4096, cccid=4 00:22:11.058 [2024-07-12 17:30:29.731866] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x526440) on tqpair(0x4a2ec0): expected_datao=0, payload_size=4096 00:22:11.058 [2024-07-12 17:30:29.731870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731875] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731878] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.058 [2024-07-12 17:30:29.731898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.058 [2024-07-12 17:30:29.731901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x526440) on tqpair=0x4a2ec0 00:22:11.058 [2024-07-12 17:30:29.731910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:11.058 
[2024-07-12 17:30:29.731940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731945] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:11.058 [2024-07-12 17:30:29.731950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:11.058 [2024-07-12 17:30:29.731955] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:11.058 [2024-07-12 17:30:29.731967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4a2ec0) 00:22:11.058 [2024-07-12 17:30:29.731976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.058 [2024-07-12 17:30:29.731982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.731988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4a2ec0) 00:22:11.058 [2024-07-12 17:30:29.731993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.058 [2024-07-12 17:30:29.732005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x526440, cid 4, qid 0 00:22:11.058 [2024-07-12 17:30:29.732009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5265c0, cid 5, qid 0 00:22:11.058 [2024-07-12 17:30:29.732143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.058 
[2024-07-12 17:30:29.732149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.058 [2024-07-12 17:30:29.732152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.732155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x526440) on tqpair=0x4a2ec0 00:22:11.058 [2024-07-12 17:30:29.732160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.058 [2024-07-12 17:30:29.732165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.058 [2024-07-12 17:30:29.732168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.732171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5265c0) on tqpair=0x4a2ec0 00:22:11.058 [2024-07-12 17:30:29.732180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.732183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4a2ec0) 00:22:11.058 [2024-07-12 17:30:29.732188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.058 [2024-07-12 17:30:29.732198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5265c0, cid 5, qid 0 00:22:11.058 [2024-07-12 17:30:29.732281] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.058 [2024-07-12 17:30:29.732288] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.058 [2024-07-12 17:30:29.732290] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.732294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5265c0) on tqpair=0x4a2ec0 00:22:11.058 [2024-07-12 17:30:29.732301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.058 [2024-07-12 17:30:29.732304] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4a2ec0) 00:22:11.058 [2024-07-12 17:30:29.732310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.058 [2024-07-12 17:30:29.732319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5265c0, cid 5, qid 0 00:22:11.058 [2024-07-12 17:30:29.732432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.059 [2024-07-12 17:30:29.732439] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.059 [2024-07-12 17:30:29.732442] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732445] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5265c0) on tqpair=0x4a2ec0 00:22:11.059 [2024-07-12 17:30:29.732452] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4a2ec0) 00:22:11.059 [2024-07-12 17:30:29.732463] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.059 [2024-07-12 17:30:29.732473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5265c0, cid 5, qid 0 00:22:11.059 [2024-07-12 17:30:29.732540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.059 [2024-07-12 17:30:29.732546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.059 [2024-07-12 17:30:29.732549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5265c0) on tqpair=0x4a2ec0 00:22:11.059 [2024-07-12 17:30:29.732564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:22:11.059 [2024-07-12 17:30:29.732568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4a2ec0) 00:22:11.059 [2024-07-12 17:30:29.732573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.059 [2024-07-12 17:30:29.732579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732582] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4a2ec0) 00:22:11.059 [2024-07-12 17:30:29.732588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.059 [2024-07-12 17:30:29.732594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x4a2ec0) 00:22:11.059 [2024-07-12 17:30:29.732602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.059 [2024-07-12 17:30:29.732608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x4a2ec0) 00:22:11.059 [2024-07-12 17:30:29.732616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.059 [2024-07-12 17:30:29.732627] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5265c0, cid 5, qid 0 00:22:11.059 [2024-07-12 17:30:29.732631] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x526440, cid 4, qid 0 
00:22:11.059 [2024-07-12 17:30:29.732635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x526740, cid 6, qid 0 00:22:11.059 [2024-07-12 17:30:29.732639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5268c0, cid 7, qid 0 00:22:11.059 [2024-07-12 17:30:29.732782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.059 [2024-07-12 17:30:29.732788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.059 [2024-07-12 17:30:29.732791] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732794] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4a2ec0): datao=0, datal=8192, cccid=5 00:22:11.059 [2024-07-12 17:30:29.732798] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5265c0) on tqpair(0x4a2ec0): expected_datao=0, payload_size=8192 00:22:11.059 [2024-07-12 17:30:29.732802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732860] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732864] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.059 [2024-07-12 17:30:29.732874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.059 [2024-07-12 17:30:29.732877] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732881] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4a2ec0): datao=0, datal=512, cccid=4 00:22:11.059 [2024-07-12 17:30:29.732885] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x526440) on tqpair(0x4a2ec0): expected_datao=0, payload_size=512 00:22:11.059 [2024-07-12 17:30:29.732889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.059 
[2024-07-12 17:30:29.732895] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732898] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.059 [2024-07-12 17:30:29.732907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.059 [2024-07-12 17:30:29.732910] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732913] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4a2ec0): datao=0, datal=512, cccid=6 00:22:11.059 [2024-07-12 17:30:29.732917] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x526740) on tqpair(0x4a2ec0): expected_datao=0, payload_size=512 00:22:11.059 [2024-07-12 17:30:29.732920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732926] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732929] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.059 [2024-07-12 17:30:29.732938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.059 [2024-07-12 17:30:29.732941] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732944] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4a2ec0): datao=0, datal=4096, cccid=7 00:22:11.059 [2024-07-12 17:30:29.732948] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5268c0) on tqpair(0x4a2ec0): expected_datao=0, payload_size=4096 00:22:11.059 [2024-07-12 17:30:29.732952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732957] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732960] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.059 [2024-07-12 17:30:29.732973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.059 [2024-07-12 17:30:29.732976] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.732979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5265c0) on tqpair=0x4a2ec0 00:22:11.059 [2024-07-12 17:30:29.732989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.059 [2024-07-12 17:30:29.732994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.059 [2024-07-12 17:30:29.732997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.733001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x526440) on tqpair=0x4a2ec0 00:22:11.059 [2024-07-12 17:30:29.733009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.059 [2024-07-12 17:30:29.733014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.059 [2024-07-12 17:30:29.733017] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.733020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x526740) on tqpair=0x4a2ec0 00:22:11.059 [2024-07-12 17:30:29.733026] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.059 [2024-07-12 17:30:29.733031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.059 [2024-07-12 17:30:29.733034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.059 [2024-07-12 17:30:29.733037] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5268c0) on tqpair=0x4a2ec0 00:22:11.059 
=====================================================
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
=====================================================
Controller Capabilities/Features
================================
Vendor ID: 8086
Subsystem Vendor ID: 8086
Serial Number: SPDK00000000000001
Model Number: SPDK bdev Controller
Firmware Version: 24.09
Recommended Arb Burst: 6
IEEE OUI Identifier: e4 d2 5c
Multi-path I/O
May have multiple subsystem ports: Yes
May have multiple controllers: Yes
Associated with SR-IOV VF: No
Max Data Transfer Size: 131072
Max Number of Namespaces: 32
Max Number of I/O Queues: 127
NVMe Specification Version (VS): 1.3
NVMe Specification Version (Identify): 1.3
Maximum Queue Entries: 128
Contiguous Queues Required: Yes
Arbitration Mechanisms Supported
Weighted Round Robin: Not Supported
Vendor Specific: Not Supported
Reset Timeout: 15000 ms
Doorbell Stride: 4 bytes
NVM Subsystem Reset: Not Supported
Command Sets Supported
NVM Command Set: Supported
Boot Partition: Not Supported
Memory Page Size Minimum: 4096 bytes
Memory Page Size Maximum: 4096 bytes
Persistent Memory Region: Not Supported
Optional Asynchronous Events Supported
Namespace Attribute Notices: Supported
Firmware Activation Notices: Not Supported
ANA Change Notices: Not Supported
PLE Aggregate Log Change Notices: Not Supported
LBA Status Info Alert Notices: Not Supported
EGE Aggregate Log Change Notices: Not Supported
Normal NVM Subsystem Shutdown event: Not Supported
Zone Descriptor Change Notices: Not Supported
Discovery Log Change Notices: Not Supported
Controller Attributes
128-bit Host Identifier: Supported
Non-Operational Permissive Mode: Not Supported
NVM Sets: Not Supported
Read Recovery Levels: Not Supported
Endurance Groups: Not Supported
Predictable Latency Mode: Not Supported
Traffic Based Keep ALive: Not Supported
Namespace Granularity: Not Supported
SQ Associations: Not Supported
UUID List: Not Supported
Multi-Domain Subsystem: Not Supported
Fixed Capacity Management: Not Supported
Variable Capacity Management: Not Supported
Delete Endurance Group: Not Supported
Delete NVM Set: Not Supported
Extended LBA Formats Supported: Not Supported
Flexible Data Placement Supported: Not Supported

Controller Memory Buffer Support
================================
Supported: No

Persistent Memory Region Support
================================
Supported: No

Admin Command Set Attributes
============================
Security Send/Receive: Not Supported
Format NVM: Not Supported
Firmware Activate/Download: Not Supported
Namespace Management: Not Supported
Device Self-Test: Not Supported
Directives: Not Supported
NVMe-MI: Not Supported
Virtualization Management: Not Supported
Doorbell Buffer Config: Not Supported
Get LBA Status Capability: Not Supported
Command & Feature Lockdown Capability: Not Supported
Abort Command Limit: 4
Async Event Request Limit: 4
Number of Firmware Slots: N/A
Firmware Slot 1 Read-Only: N/A
Firmware Activation Without Reset: N/A
Multiple Update Detection Support: N/A
Firmware Update Granularity: No Information Provided
Per-Namespace SMART Log: No
Asymmetric Namespace Access Log Page: Not Supported
Subsystem NQN: nqn.2016-06.io.spdk:cnode1
Command Effects Log Page: Supported
Get Log Page Extended Data: Supported
Telemetry Log Pages: Not Supported
Persistent Event Log Pages: Not Supported
Supported Log Pages Log Page: May Support
Commands Supported & Effects Log Page: Not Supported
Feature Identifiers & Effects Log Page: May Support
NVMe-MI Commands & Effects Log Page: May Support
Data Area 4 for Telemetry Log: Not Supported
Error Log Page Entries Supported: 128
Keep Alive: Supported
Keep Alive Granularity: 10000 ms

NVM Command Set Attributes
==========================
Submission Queue Entry Size
Max: 64
Min: 64
Completion Queue Entry Size
Max: 16
Min: 16
Number of Namespaces: 32
Compare Command: Supported
Write Uncorrectable Command: Not Supported
Dataset Management Command: Supported
Write Zeroes Command: Supported
Set Features Save Field: Not Supported
Reservations: Supported
Timestamp: Not Supported
Copy: Supported
Volatile Write Cache: Present
Atomic Write Unit (Normal): 1
Atomic Write Unit (PFail): 1
Atomic Compare & Write Unit: 1
Fused Compare & Write: Supported
Scatter-Gather List
SGL Command Set: Supported
SGL Keyed: Supported
SGL Bit Bucket Descriptor: Not Supported
SGL Metadata Pointer: Not Supported
Oversized SGL: Not Supported
SGL Metadata Address: Not Supported
SGL Offset: Supported
Transport SGL Data Block: Not Supported
Replay Protected Memory Block: Not Supported

Firmware Slot Information
=========================
Active slot: 1
Slot 1 Firmware Revision: 24.09

Commands Supported and Effects
==============================
Admin Commands
--------------
Get Log Page (02h): Supported
Identify (06h): Supported
Abort (08h): Supported
Set Features (09h): Supported
Get Features (0Ah): Supported
Asynchronous Event Request (0Ch): Supported
Keep Alive (18h): Supported
I/O Commands
------------
Flush (00h): Supported LBA-Change
Write (01h): Supported LBA-Change
Read (02h): Supported
Compare (05h): Supported
Write Zeroes (08h): Supported LBA-Change
Dataset Management (09h): Supported LBA-Change
Copy (19h): Supported LBA-Change

Error Log
=========

Arbitration
===========
Arbitration Burst: 1

Power Management
================
Number of Power States: 1
Current Power State: Power State #0
Power State #0:
Max Power: 0.00 W
Non-Operational State: Operational
Entry Latency: Not Reported
Exit Latency: Not Reported
Relative Read Throughput: 0
Relative Read Latency: 0
Relative Write Throughput: 0
Relative Write Latency: 0
Idle Power: Not Reported
Active Power: Not Reported
Non-Operational Permissive Mode: Not Supported

Health Information
==================
Critical Warnings:
Available Spare Space: OK
Temperature: OK
Device Reliability: OK
Read Only: No
Volatile Memory Backup: OK
Current Temperature: 0 Kelvin (-273 Celsius)
Temperature Threshold: 0 Kelvin (-273 Celsius)
Available Spare: 0%
Available Spare Threshold: 0%
Life Percentage Used:[2024-07-12 17:30:29.733116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-07-12 17:30:29.733122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x4a2ec0)
[2024-07-12 17:30:29.733128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-12 17:30:29.733139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5268c0, cid 7, qid 0
[2024-07-12 17:30:29.733217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-07-12 17:30:29.733223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-07-12 17:30:29.733226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-07-12 17:30:29.733229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5268c0) on tqpair=0x4a2ec0
[2024-07-12 17:30:29.733257] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
[2024-07-12 17:30:29.733266] nvme_tcp.c:1069:nvme_tcp_req_complete:
*DEBUG*: complete tcp_req(0x525e40) on tqpair=0x4a2ec0 00:22:11.060 [2024-07-12 17:30:29.733271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.060 [2024-07-12 17:30:29.733276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x525fc0) on tqpair=0x4a2ec0 00:22:11.060 [2024-07-12 17:30:29.733280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.060 [2024-07-12 17:30:29.733284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x526140) on tqpair=0x4a2ec0 00:22:11.060 [2024-07-12 17:30:29.733288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.060 [2024-07-12 17:30:29.733292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.060 [2024-07-12 17:30:29.733296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.060 [2024-07-12 17:30:29.733302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.733305] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.733309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.060 [2024-07-12 17:30:29.733314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.060 [2024-07-12 17:30:29.733325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.060 [2024-07-12 17:30:29.737388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.060 [2024-07-12 17:30:29.737395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:22:11.060 [2024-07-12 17:30:29.737398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.737402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.060 [2024-07-12 17:30:29.737407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.737411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.737414] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.060 [2024-07-12 17:30:29.737419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.060 [2024-07-12 17:30:29.737434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.060 [2024-07-12 17:30:29.737594] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.060 [2024-07-12 17:30:29.737600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.060 [2024-07-12 17:30:29.737603] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.737606] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.060 [2024-07-12 17:30:29.737610] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:11.060 [2024-07-12 17:30:29.737616] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:11.060 [2024-07-12 17:30:29.737624] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.737628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.737631] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x4a2ec0) 00:22:11.060 [2024-07-12 17:30:29.737637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.060 [2024-07-12 17:30:29.737646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.060 [2024-07-12 17:30:29.737743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.060 [2024-07-12 17:30:29.737749] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.060 [2024-07-12 17:30:29.737752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.737755] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.060 [2024-07-12 17:30:29.737763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.737766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.737769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.060 [2024-07-12 17:30:29.737775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.060 [2024-07-12 17:30:29.737784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.060 [2024-07-12 17:30:29.737859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.060 [2024-07-12 17:30:29.737864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.060 [2024-07-12 17:30:29.737867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.737870] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.060 [2024-07-12 17:30:29.737878] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:22:11.060 [2024-07-12 17:30:29.737882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.737885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.060 [2024-07-12 17:30:29.737891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.060 [2024-07-12 17:30:29.737900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.060 [2024-07-12 17:30:29.737994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.060 [2024-07-12 17:30:29.738000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.060 [2024-07-12 17:30:29.738002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.060 [2024-07-12 17:30:29.738014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.060 [2024-07-12 17:30:29.738026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.060 [2024-07-12 17:30:29.738035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.060 [2024-07-12 17:30:29.738146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.060 [2024-07-12 17:30:29.738152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.060 [2024-07-12 17:30:29.738155] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:11.060 [2024-07-12 17:30:29.738160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.060 [2024-07-12 17:30:29.738167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738174] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.060 [2024-07-12 17:30:29.738180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.060 [2024-07-12 17:30:29.738189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.060 [2024-07-12 17:30:29.738298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.060 [2024-07-12 17:30:29.738304] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.060 [2024-07-12 17:30:29.738307] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.060 [2024-07-12 17:30:29.738318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.060 [2024-07-12 17:30:29.738330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.060 [2024-07-12 17:30:29.738339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.060 [2024-07-12 17:30:29.738414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:22:11.060 [2024-07-12 17:30:29.738420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.060 [2024-07-12 17:30:29.738423] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.060 [2024-07-12 17:30:29.738434] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.060 [2024-07-12 17:30:29.738446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.060 [2024-07-12 17:30:29.738456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.060 [2024-07-12 17:30:29.738549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.060 [2024-07-12 17:30:29.738555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.060 [2024-07-12 17:30:29.738558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.060 [2024-07-12 17:30:29.738568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.060 [2024-07-12 17:30:29.738575] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.060 [2024-07-12 17:30:29.738580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:11.060 [2024-07-12 17:30:29.738589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.060 [2024-07-12 17:30:29.738701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.060 [2024-07-12 17:30:29.738706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.738709] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.738712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.738722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.738726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.738729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.738734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.738744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.738851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.738857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.738859] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.738863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.738871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.738874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.738877] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.738883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.738892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.738962] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.738967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.738970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.738973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.738981] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.738984] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.738988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.738993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.739002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.739104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.739109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.739112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.739123] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739127] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.739135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.739144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.739254] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.739259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.739262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.739273] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.739287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.739295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.739405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.739412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.739414] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.739425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739432] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.739437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.739447] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.739512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.739518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.739521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.739532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739538] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.739544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.739553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 
17:30:29.739657] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.739663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.739666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.739677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.739689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.739698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.739809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.739815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.739818] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739821] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.739829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739833] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739836] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.739843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.739852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.739961] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.739967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.739970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.739981] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.739988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.739993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.740002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.740072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.740078] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.740081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.740092] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 
[2024-07-12 17:30:29.740098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.740104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.740113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.740211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.740217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.740220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.740231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740234] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.740243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.740252] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.740364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.740369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.740372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 
00:22:11.061 [2024-07-12 17:30:29.740390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.740404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.740415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.740514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.740520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.740522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.740534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.740546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.740555] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.740620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.740625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 
[2024-07-12 17:30:29.740628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.740639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.740651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.740660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.740767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.740773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.740776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.740787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740790] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740793] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.740799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.740808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 
00:22:11.061 [2024-07-12 17:30:29.740916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.740922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.740925] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.740936] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.740942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.061 [2024-07-12 17:30:29.740948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.061 [2024-07-12 17:30:29.740958] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.061 [2024-07-12 17:30:29.744385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.061 [2024-07-12 17:30:29.744395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.061 [2024-07-12 17:30:29.744398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.744401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.061 [2024-07-12 17:30:29.744413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.744416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.061 [2024-07-12 17:30:29.744420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4a2ec0) 00:22:11.062 [2024-07-12 17:30:29.744427] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.062 [2024-07-12 17:30:29.744438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5262c0, cid 3, qid 0 00:22:11.062 [2024-07-12 17:30:29.744572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.062 [2024-07-12 17:30:29.744578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.062 [2024-07-12 17:30:29.744581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.062 [2024-07-12 17:30:29.744584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5262c0) on tqpair=0x4a2ec0 00:22:11.062 [2024-07-12 17:30:29.744590] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:22:11.062 0% 00:22:11.062 Data Units Read: 0 00:22:11.062 Data Units Written: 0 00:22:11.062 Host Read Commands: 0 00:22:11.062 Host Write Commands: 0 00:22:11.062 Controller Busy Time: 0 minutes 00:22:11.062 Power Cycles: 0 00:22:11.062 Power On Hours: 0 hours 00:22:11.062 Unsafe Shutdowns: 0 00:22:11.062 Unrecoverable Media Errors: 0 00:22:11.062 Lifetime Error Log Entries: 0 00:22:11.062 Warning Temperature Time: 0 minutes 00:22:11.062 Critical Temperature Time: 0 minutes 00:22:11.062 00:22:11.062 Number of Queues 00:22:11.062 ================ 00:22:11.062 Number of I/O Submission Queues: 127 00:22:11.062 Number of I/O Completion Queues: 127 00:22:11.062 00:22:11.062 Active Namespaces 00:22:11.062 ================= 00:22:11.062 Namespace ID:1 00:22:11.062 Error Recovery Timeout: Unlimited 00:22:11.062 Command Set Identifier: NVM (00h) 00:22:11.062 Deallocate: Supported 00:22:11.062 Deallocated/Unwritten Error: Not Supported 00:22:11.062 Deallocated Read Value: Unknown 00:22:11.062 Deallocate in Write Zeroes: Not Supported 00:22:11.062 Deallocated Guard Field: 0xFFFF 00:22:11.062 Flush: Supported 
00:22:11.062 Reservation: Supported 00:22:11.062 Namespace Sharing Capabilities: Multiple Controllers 00:22:11.062 Size (in LBAs): 131072 (0GiB) 00:22:11.062 Capacity (in LBAs): 131072 (0GiB) 00:22:11.062 Utilization (in LBAs): 131072 (0GiB) 00:22:11.062 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:11.062 EUI64: ABCDEF0123456789 00:22:11.062 UUID: 9ab51626-c8e0-4fc1-90f4-0b8914552041 00:22:11.062 Thin Provisioning: Not Supported 00:22:11.062 Per-NS Atomic Units: Yes 00:22:11.062 Atomic Boundary Size (Normal): 0 00:22:11.062 Atomic Boundary Size (PFail): 0 00:22:11.062 Atomic Boundary Offset: 0 00:22:11.062 Maximum Single Source Range Length: 65535 00:22:11.062 Maximum Copy Length: 65535 00:22:11.062 Maximum Source Range Count: 1 00:22:11.062 NGUID/EUI64 Never Reused: No 00:22:11.062 Namespace Write Protected: No 00:22:11.062 Number of LBA Formats: 1 00:22:11.062 Current LBA Format: LBA Format #00 00:22:11.062 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:11.062 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 
00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:11.062 rmmod nvme_tcp 00:22:11.062 rmmod nvme_fabrics 00:22:11.062 rmmod nvme_keyring 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 4147204 ']' 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 4147204 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 4147204 ']' 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 4147204 00:22:11.062 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:11.320 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:11.320 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4147204 00:22:11.320 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:11.320 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:11.320 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4147204' 00:22:11.320 killing process with pid 4147204 00:22:11.320 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 4147204 00:22:11.320 17:30:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 4147204 00:22:11.320 17:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:11.320 17:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:22:11.320 17:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:11.320 17:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.320 17:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:11.320 17:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.320 17:30:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.320 17:30:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.850 17:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:13.850 00:22:13.850 real 0m8.645s 00:22:13.850 user 0m6.899s 00:22:13.850 sys 0m4.115s 00:22:13.850 17:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:13.850 17:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:13.850 ************************************ 00:22:13.850 END TEST nvmf_identify 00:22:13.850 ************************************ 00:22:13.850 17:30:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:13.850 17:30:32 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:13.850 17:30:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:13.850 17:30:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.850 17:30:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.850 ************************************ 00:22:13.850 START TEST nvmf_perf 00:22:13.850 ************************************ 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:13.850 * Looking for test storage... 
00:22:13.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.850 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:13.851 17:30:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.114 17:30:37 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:19.114 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:19.114 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:19.114 Found net devices under 0000:86:00.0: cvl_0_0 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:22:19.114 Found net devices under 0000:86:00.1: cvl_0_1 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.114 
17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:22:19.114 00:22:19.114 --- 10.0.0.2 ping statistics --- 00:22:19.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.114 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:19.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:22:19.114 00:22:19.114 --- 10.0.0.1 ping statistics --- 00:22:19.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.114 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=4150747 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 4150747 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 4150747 ']' 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.114 17:30:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.115 17:30:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:19.115 [2024-07-12 17:30:37.441370] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:22:19.115 [2024-07-12 17:30:37.441426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.115 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.115 [2024-07-12 17:30:37.499960] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.115 [2024-07-12 17:30:37.572258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.115 [2024-07-12 17:30:37.572299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.115 [2024-07-12 17:30:37.572308] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.115 [2024-07-12 17:30:37.572315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.115 [2024-07-12 17:30:37.572319] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:19.115 [2024-07-12 17:30:37.572412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.115 [2024-07-12 17:30:37.572461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.115 [2024-07-12 17:30:37.572553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.115 [2024-07-12 17:30:37.572554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.681 17:30:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.681 17:30:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:19.681 17:30:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.681 17:30:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:19.681 17:30:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:19.681 17:30:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.681 17:30:38 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:19.681 17:30:38 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:22.966 17:30:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:22.966 17:30:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:22.966 17:30:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:22.966 17:30:41 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:22.966 17:30:41 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:22.966 17:30:41 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:5e:00.0 ']' 00:22:22.966 17:30:41 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:22.966 17:30:41 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:22.966 17:30:41 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:23.224 [2024-07-12 17:30:41.844206] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.224 17:30:41 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:23.482 17:30:42 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:23.482 17:30:42 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:23.482 17:30:42 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:23.482 17:30:42 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:23.740 17:30:42 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:23.998 [2024-07-12 17:30:42.591047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.999 17:30:42 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:24.256 17:30:42 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:24.256 17:30:42 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 
00:22:24.256 17:30:42 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:24.256 17:30:42 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:25.261 Initializing NVMe Controllers 00:22:25.261 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:25.261 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:25.261 Initialization complete. Launching workers. 00:22:25.261 ======================================================== 00:22:25.261 Latency(us) 00:22:25.261 Device Information : IOPS MiB/s Average min max 00:22:25.261 PCIE (0000:5e:00.0) NSID 1 from core 0: 97662.12 381.49 327.26 10.62 6200.86 00:22:25.261 ======================================================== 00:22:25.261 Total : 97662.12 381.49 327.26 10.62 6200.86 00:22:25.261 00:22:25.261 17:30:44 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:25.519 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.453 Initializing NVMe Controllers 00:22:26.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:26.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:26.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:26.453 Initialization complete. Launching workers. 
00:22:26.453 ======================================================== 00:22:26.453 Latency(us) 00:22:26.453 Device Information : IOPS MiB/s Average min max 00:22:26.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.70 0.33 12113.41 126.40 45549.30 00:22:26.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.76 0.26 15096.72 4987.27 49874.15 00:22:26.454 ======================================================== 00:22:26.454 Total : 151.46 0.59 13428.42 126.40 49874.15 00:22:26.454 00:22:26.711 17:30:45 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:26.711 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.086 Initializing NVMe Controllers 00:22:28.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:28.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:28.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:28.086 Initialization complete. Launching workers. 
00:22:28.086 ======================================================== 00:22:28.086 Latency(us) 00:22:28.086 Device Information : IOPS MiB/s Average min max 00:22:28.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11051.78 43.17 2897.19 428.99 6220.46 00:22:28.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3922.57 15.32 8200.60 5328.83 17319.91 00:22:28.086 ======================================================== 00:22:28.086 Total : 14974.35 58.49 4286.44 428.99 17319.91 00:22:28.086 00:22:28.086 17:30:46 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:28.086 17:30:46 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:28.086 17:30:46 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:28.086 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.615 Initializing NVMe Controllers 00:22:30.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.615 Controller IO queue size 128, less than required. 00:22:30.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.615 Controller IO queue size 128, less than required. 00:22:30.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:30.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:30.615 Initialization complete. Launching workers. 
00:22:30.615 ======================================================== 00:22:30.615 Latency(us) 00:22:30.615 Device Information : IOPS MiB/s Average min max 00:22:30.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1693.40 423.35 76848.43 53786.54 114197.14 00:22:30.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 624.96 156.24 217334.42 56973.59 331054.63 00:22:30.615 ======================================================== 00:22:30.615 Total : 2318.36 579.59 114719.36 53786.54 331054.63 00:22:30.615 00:22:30.615 17:30:48 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:30.615 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.615 No valid NVMe controllers or AIO or URING devices found 00:22:30.615 Initializing NVMe Controllers 00:22:30.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.615 Controller IO queue size 128, less than required. 00:22:30.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.615 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:30.615 Controller IO queue size 128, less than required. 00:22:30.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.615 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:30.615 WARNING: Some requested NVMe devices were skipped 00:22:30.615 17:30:49 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:30.615 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.147 Initializing NVMe Controllers 00:22:33.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:33.147 Controller IO queue size 128, less than required. 00:22:33.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:33.147 Controller IO queue size 128, less than required. 00:22:33.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:33.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:33.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:33.147 Initialization complete. Launching workers. 
00:22:33.147 00:22:33.147 ==================== 00:22:33.147 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:33.147 TCP transport: 00:22:33.147 polls: 22691 00:22:33.147 idle_polls: 12476 00:22:33.147 sock_completions: 10215 00:22:33.147 nvme_completions: 5529 00:22:33.147 submitted_requests: 8372 00:22:33.147 queued_requests: 1 00:22:33.147 00:22:33.147 ==================== 00:22:33.147 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:33.147 TCP transport: 00:22:33.147 polls: 19788 00:22:33.147 idle_polls: 9347 00:22:33.147 sock_completions: 10441 00:22:33.147 nvme_completions: 6885 00:22:33.147 submitted_requests: 10296 00:22:33.147 queued_requests: 1 00:22:33.147 ======================================================== 00:22:33.147 Latency(us) 00:22:33.147 Device Information : IOPS MiB/s Average min max 00:22:33.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1381.90 345.47 94177.70 49181.80 140010.74 00:22:33.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1720.87 430.22 75596.54 33006.59 117493.40 00:22:33.147 ======================================================== 00:22:33.147 Total : 3102.77 775.69 83872.13 33006.59 140010.74 00:22:33.147 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:33.147 17:30:51 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.147 rmmod nvme_tcp 00:22:33.147 rmmod nvme_fabrics 00:22:33.147 rmmod nvme_keyring 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 4150747 ']' 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 4150747 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 4150747 ']' 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 4150747 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:33.147 17:30:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.406 17:30:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4150747 00:22:33.406 17:30:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:33.406 17:30:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:33.406 17:30:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4150747' 00:22:33.406 killing process with pid 4150747 00:22:33.406 17:30:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 4150747 00:22:33.406 17:30:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 4150747 00:22:34.781 17:30:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:34.781 17:30:53 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:34.781 17:30:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:34.781 17:30:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.781 17:30:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:34.781 17:30:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.781 17:30:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.781 17:30:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.313 17:30:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:37.313 00:22:37.313 real 0m23.351s 00:22:37.313 user 1m3.400s 00:22:37.313 sys 0m7.008s 00:22:37.313 17:30:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:37.313 17:30:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:37.313 ************************************ 00:22:37.313 END TEST nvmf_perf 00:22:37.313 ************************************ 00:22:37.313 17:30:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:37.313 17:30:55 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:37.313 17:30:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:37.313 17:30:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.313 17:30:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:37.313 ************************************ 00:22:37.313 START TEST nvmf_fio_host 00:22:37.313 ************************************ 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:37.313 * Looking for test 
storage... 00:22:37.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:37.313 
17:30:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:37.313 17:30:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:42.583 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:42.583 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:42.583 Found net devices under 0000:86:00.0: cvl_0_0 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.583 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:42.584 Found net devices under 0000:86:00.1: cvl_0_1 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:42.584 
17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:42.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:22:42.584 00:22:42.584 --- 10.0.0.2 ping statistics --- 00:22:42.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.584 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:22:42.584 00:22:42.584 --- 10.0.0.1 ping statistics --- 00:22:42.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.584 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:42.584 17:31:00 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4156634 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4156634 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 4156634 ']' 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:42.584 17:31:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.584 [2024-07-12 17:31:00.624041] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:22:42.584 [2024-07-12 17:31:00.624083] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.584 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.584 [2024-07-12 17:31:00.680960] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.584 [2024-07-12 17:31:00.761543] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.584 [2024-07-12 17:31:00.761581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.584 [2024-07-12 17:31:00.761588] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.584 [2024-07-12 17:31:00.761594] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.584 [2024-07-12 17:31:00.761599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:42.584 [2024-07-12 17:31:00.761658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.584 [2024-07-12 17:31:00.761751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.584 [2024-07-12 17:31:00.761834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.584 [2024-07-12 17:31:00.761836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.843 17:31:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.843 17:31:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:42.843 17:31:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:42.843 [2024-07-12 17:31:01.583840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.843 17:31:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:42.843 17:31:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:42.843 17:31:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.101 17:31:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:43.101 Malloc1 00:22:43.101 17:31:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:43.359 17:31:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:43.617 17:31:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.617 
[2024-07-12 17:31:02.381994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:43.876 17:31:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:44.134 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:44.134 fio-3.35 00:22:44.134 Starting 1 thread 00:22:44.392 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.924 00:22:46.924 test: (groupid=0, jobs=1): err= 0: pid=4157222: Fri Jul 12 17:31:05 2024 00:22:46.924 read: IOPS=11.8k, BW=46.1MiB/s (48.4MB/s)(92.5MiB/2005msec) 00:22:46.924 slat (nsec): 
min=1603, max=244385, avg=1740.66, stdev=2238.56 00:22:46.924 clat (usec): min=3163, max=10675, avg=5997.09, stdev=449.44 00:22:46.924 lat (usec): min=3198, max=10676, avg=5998.83, stdev=449.38 00:22:46.924 clat percentiles (usec): 00:22:46.924 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:22:46.924 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6128], 00:22:46.924 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:22:46.924 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 8979], 99.95th=[10159], 00:22:46.924 | 99.99th=[10683] 00:22:46.924 bw ( KiB/s): min=46288, max=47936, per=99.97%, avg=47206.00, stdev=703.13, samples=4 00:22:46.925 iops : min=11572, max=11984, avg=11801.50, stdev=175.78, samples=4 00:22:46.925 write: IOPS=11.7k, BW=45.9MiB/s (48.1MB/s)(92.0MiB/2005msec); 0 zone resets 00:22:46.925 slat (nsec): min=1655, max=228305, avg=1832.11, stdev=1665.12 00:22:46.925 clat (usec): min=2466, max=9495, avg=4827.16, stdev=367.43 00:22:46.925 lat (usec): min=2481, max=9497, avg=4828.99, stdev=367.43 00:22:46.925 clat percentiles (usec): 00:22:46.925 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 4555], 00:22:46.925 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 00:22:46.925 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5407], 00:22:46.925 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 7373], 99.95th=[ 8291], 00:22:46.925 | 99.99th=[ 8979] 00:22:46.925 bw ( KiB/s): min=46752, max=47424, per=99.99%, avg=46988.00, stdev=302.21, samples=4 00:22:46.925 iops : min=11688, max=11856, avg=11747.00, stdev=75.55, samples=4 00:22:46.925 lat (msec) : 4=0.56%, 10=99.41%, 20=0.03% 00:22:46.925 cpu : usr=72.70%, sys=25.10%, ctx=99, majf=0, minf=6 00:22:46.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:46.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:46.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:46.925 issued rwts: total=23670,23554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:46.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:46.925 00:22:46.925 Run status group 0 (all jobs): 00:22:46.925 READ: bw=46.1MiB/s (48.4MB/s), 46.1MiB/s-46.1MiB/s (48.4MB/s-48.4MB/s), io=92.5MiB (97.0MB), run=2005-2005msec 00:22:46.925 WRITE: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=92.0MiB (96.5MB), run=2005-2005msec 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:46.925 17:31:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:46.925 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:46.925 fio-3.35 00:22:46.925 Starting 1 thread 00:22:46.925 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.460 00:22:49.460 test: (groupid=0, jobs=1): err= 0: pid=4157796: Fri Jul 12 17:31:07 2024 00:22:49.460 read: IOPS=10.9k, BW=170MiB/s (178MB/s)(340MiB/2006msec) 00:22:49.460 slat (nsec): 
min=2585, max=90981, avg=2862.24, stdev=1216.81 00:22:49.460 clat (usec): min=1604, max=13698, avg=6848.46, stdev=1621.39 00:22:49.460 lat (usec): min=1607, max=13700, avg=6851.32, stdev=1621.46 00:22:49.460 clat percentiles (usec): 00:22:49.460 | 1.00th=[ 3687], 5.00th=[ 4228], 10.00th=[ 4752], 20.00th=[ 5473], 00:22:49.460 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 7308], 00:22:49.460 | 70.00th=[ 7701], 80.00th=[ 8029], 90.00th=[ 8848], 95.00th=[ 9503], 00:22:49.460 | 99.00th=[11207], 99.50th=[11994], 99.90th=[13435], 99.95th=[13566], 00:22:49.460 | 99.99th=[13698] 00:22:49.460 bw ( KiB/s): min=82240, max=93728, per=50.47%, avg=87688.00, stdev=4800.74, samples=4 00:22:49.460 iops : min= 5140, max= 5858, avg=5480.50, stdev=300.05, samples=4 00:22:49.460 write: IOPS=6343, BW=99.1MiB/s (104MB/s)(180MiB/1812msec); 0 zone resets 00:22:49.460 slat (usec): min=30, max=255, avg=32.08, stdev= 5.02 00:22:49.460 clat (usec): min=2336, max=13609, avg=8543.76, stdev=1495.83 00:22:49.460 lat (usec): min=2366, max=13643, avg=8575.84, stdev=1496.35 00:22:49.460 clat percentiles (usec): 00:22:49.460 | 1.00th=[ 5669], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 7308], 00:22:49.460 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717], 00:22:49.460 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11207], 00:22:49.460 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13304], 99.95th=[13304], 00:22:49.460 | 99.99th=[13566] 00:22:49.460 bw ( KiB/s): min=86752, max=97760, per=90.10%, avg=91456.00, stdev=4749.46, samples=4 00:22:49.460 iops : min= 5422, max= 6110, avg=5716.00, stdev=296.84, samples=4 00:22:49.460 lat (msec) : 2=0.02%, 4=1.85%, 10=89.84%, 20=8.29% 00:22:49.460 cpu : usr=84.49%, sys=14.01%, ctx=89, majf=0, minf=3 00:22:49.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:49.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:49.460 issued rwts: total=21784,11495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:49.460 00:22:49.460 Run status group 0 (all jobs): 00:22:49.460 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=340MiB (357MB), run=2006-2006msec 00:22:49.460 WRITE: bw=99.1MiB/s (104MB/s), 99.1MiB/s-99.1MiB/s (104MB/s-104MB/s), io=180MiB (188MB), run=1812-1812msec 00:22:49.460 17:31:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:49.460 rmmod nvme_tcp 00:22:49.460 rmmod nvme_fabrics 00:22:49.460 rmmod nvme_keyring 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 4156634 ']' 00:22:49.460 
17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 4156634 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 4156634 ']' 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 4156634 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4156634 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4156634' 00:22:49.460 killing process with pid 4156634 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 4156634 00:22:49.460 17:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 4156634 00:22:49.719 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:49.719 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:49.719 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:49.719 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:49.719 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:49.719 17:31:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.719 17:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.719 17:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.255 17:31:10 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:52.255 00:22:52.255 real 0m14.863s 00:22:52.255 user 0m47.263s 00:22:52.255 sys 0m5.651s 00:22:52.255 17:31:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:52.255 17:31:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.255 ************************************ 00:22:52.255 END TEST nvmf_fio_host 00:22:52.255 ************************************ 00:22:52.255 17:31:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:52.255 17:31:10 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:52.255 17:31:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:52.255 17:31:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:52.255 17:31:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:52.255 ************************************ 00:22:52.255 START TEST nvmf_failover 00:22:52.255 ************************************ 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:52.255 * Looking for test storage... 
00:22:52.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.255 17:31:10 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.255 17:31:10 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:52.256 17:31:10 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:52.256 17:31:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:57.636 17:31:15 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:57.636 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:57.637 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:57.637 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:57.637 Found net devices under 0000:86:00.0: cvl_0_0 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:57.637 Found net devices under 0000:86:00.1: cvl_0_1 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:57.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:22:57.637 00:22:57.637 --- 10.0.0.2 ping statistics --- 00:22:57.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.637 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:22:57.637 00:22:57.637 --- 10.0.0.1 ping statistics --- 00:22:57.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.637 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:57.637 17:31:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:57.637 17:31:16 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:57.637 17:31:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.637 17:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:57.637 17:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:57.637 17:31:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=4161542 00:22:57.637 17:31:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 4161542 00:22:57.637 17:31:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:57.637 17:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 4161542 ']' 
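The nvmf_tcp_init sequence traced above moves one port of the NIC pair into a private network namespace, so the target (10.0.0.2 on cvl_0_0, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1, in the root namespace) exercise a real TCP path on a single host, verified by the two pings. A minimal dry-run sketch of that sequence, using the interface and namespace names from this run; commands are echoed rather than executed, since the real setup needs root and the physical NICs:

```shell
#!/bin/sh
# Dry-run sketch of the namespace topology nvmf_tcp_init builds in the trace
# above. run() echoes each command; swap "echo" for "sudo" to apply for real.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target port moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address (root ns)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address (inside ns)
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```

The target application is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_TARGET_NS_CMD` is prepended to `NVMF_APP` in the trace.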
00:22:57.637 17:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.637 17:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.637 17:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.637 17:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.637 17:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:57.637 [2024-07-12 17:31:16.079095] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:22:57.637 [2024-07-12 17:31:16.079134] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.637 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.637 [2024-07-12 17:31:16.135605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:57.637 [2024-07-12 17:31:16.213843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.638 [2024-07-12 17:31:16.213881] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.638 [2024-07-12 17:31:16.213888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.638 [2024-07-12 17:31:16.213894] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.638 [2024-07-12 17:31:16.213899] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
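The `-m 0xE` passed to nvmf_tgt above is a hexadecimal CPU core mask: 0xE is binary 1110, selecting cores 1, 2 and 3, which matches the "Total cores available: 3" notice and the three reactor threads started in this run. A small sketch of how the mask decodes:

```shell
#!/bin/sh
# Decode an SPDK-style core mask: bit N set means a reactor runs on core N.
# 0xE = 1110b -> cores 1, 2, 3 (core 0 is left free for other work).
mask=0xE
for core in 0 1 2 3 4 5 6 7; do
    if [ $(( (mask >> core) & 1 )) -eq 1 ]; then
        echo "reactor on core $core"
    fi
done
```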
00:22:57.638 [2024-07-12 17:31:16.214001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.638 [2024-07-12 17:31:16.214101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.638 [2024-07-12 17:31:16.214102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.205 17:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.205 17:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:58.205 17:31:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.205 17:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:58.206 17:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:58.206 17:31:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.206 17:31:16 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:58.464 [2024-07-12 17:31:17.069667] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.464 17:31:17 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:58.723 Malloc0 00:22:58.723 17:31:17 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:58.723 17:31:17 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:58.982 17:31:17 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:59.241 [2024-07-12 17:31:17.810494] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.241 17:31:17 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:59.241 [2024-07-12 17:31:18.003042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:59.500 17:31:18 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:59.500 [2024-07-12 17:31:18.191699] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:59.500 17:31:18 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4162019 00:22:59.500 17:31:18 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:59.500 17:31:18 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:59.500 17:31:18 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4162019 /var/tmp/bdevperf.sock 00:22:59.500 17:31:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 4162019 ']' 00:22:59.500 17:31:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.500 17:31:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.500 17:31:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.500 17:31:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.500 17:31:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.436 17:31:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.436 17:31:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:00.436 17:31:19 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:01.003 NVMe0n1 00:23:01.003 17:31:19 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:01.262 00:23:01.262 17:31:19 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4162259 00:23:01.262 17:31:19 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:01.262 17:31:19 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:02.197 17:31:20 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.456 [2024-07-12 17:31:21.084440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214080 is same with the state(5) to be set 00:23:02.456 17:31:21 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:05.740 17:31:24 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:05.740 00:23:05.740 17:31:24 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:05.999 [2024-07-12 17:31:24.602814] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214f20 is same with the state(5) to be set 00:23:05.999 [2024-07-12 17:31:24.602869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2214f20 is same with the
state(5) to be set 00:23:05.999 17:31:24 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:09.284 17:31:27 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.284 [2024-07-12 17:31:27.805089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.284 17:31:27 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:10.219 17:31:28 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:10.479 [2024-07-12 17:31:29.003521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.479 [2024-07-12 17:31:29.003588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the
state(5) to be set 00:23:10.479
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.479 [2024-07-12 17:31:29.003954] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.479 [2024-07-12 17:31:29.003960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.479 [2024-07-12 17:31:29.003966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.479 [2024-07-12 17:31:29.003972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.003977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.003983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.003989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.003994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.004000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.004005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.004011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.004017] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.004023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.004028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.004034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.004040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.004045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.004051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 [2024-07-12 17:31:29.004056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215aa0 is same with the state(5) to be set 00:23:10.480 17:31:29 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 4162259 00:23:17.049 0 00:23:17.049 17:31:35 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 4162019 00:23:17.049 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 4162019 ']' 00:23:17.049 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 4162019 00:23:17.049 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:17.049 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:17.049 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4162019 00:23:17.049 17:31:35 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:17.049 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:17.049 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4162019' 00:23:17.049 killing process with pid 4162019 00:23:17.049 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 4162019 00:23:17.049 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 4162019 00:23:17.049 17:31:35 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:17.049 [2024-07-12 17:31:18.262572] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:23:17.049 [2024-07-12 17:31:18.262624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162019 ] 00:23:17.049 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.049 [2024-07-12 17:31:18.317279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.049 [2024-07-12 17:31:18.392333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.049 Running I/O for 15 seconds... 
00:23:17.049 [2024-07-12 17:31:21.085037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.049 [2024-07-12 17:31:21.085074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.049 [2024-07-12 17:31:21.085090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.049 [2024-07-12 17:31:21.085098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.049 [2024-07-12 17:31:21.085107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.049 [2024-07-12 17:31:21.085114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.049 [2024-07-12 17:31:21.085123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.049 [2024-07-12 17:31:21.085129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.049 [2024-07-12 17:31:21.085138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.049 [2024-07-12 17:31:21.085145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.049 [2024-07-12 17:31:21.085153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.049 [2024-07-12 17:31:21.085159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.049 [2024-07-12 17:31:21.085168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.049 [2024-07-12 17:31:21.085174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.049 [2024-07-12 17:31:21.085182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.049 [2024-07-12 17:31:21.085188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.049 [2024-07-12 17:31:21.085197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.049 [2024-07-12 17:31:21.085203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.049 [2024-07-12 17:31:21.085211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 
[2024-07-12 17:31:21.085336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 
[2024-07-12 17:31:21.085590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 
lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 [2024-07-12 17:31:21.085821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.050 [2024-07-12 17:31:21.085828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.050 
[2024-07-12 17:31:21.085835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.085842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.085849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.085856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.085863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.085869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.085877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.085883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.085891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.085898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.085906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.085912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.085920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.085926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.085933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.085940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.085948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.085954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.085962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.085968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.085977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.085984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.085992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.085998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 
[2024-07-12 17:31:21.086079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 
[2024-07-12 17:31:21.086325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.051 [2024-07-12 17:31:21.086446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.051 [2024-07-12 17:31:21.086452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.052 [2024-07-12 17:31:21.086467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 
lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 
17:31:21.086578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086657] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.052 [2024-07-12 17:31:21.086699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.052 [2024-07-12 17:31:21.086927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.086946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.052 [2024-07-12 17:31:21.086953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.052 [2024-07-12 17:31:21.086959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96576 len:8 PRP1 0x0 PRP2 0x0 00:23:17.052 [2024-07-12 17:31:21.086965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.087009] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a15300 was disconnected and freed. reset controller. 
00:23:17.052 [2024-07-12 17:31:21.087017] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:17.052 [2024-07-12 17:31:21.087037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.052 [2024-07-12 17:31:21.087044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.087051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.052 [2024-07-12 17:31:21.087058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.087065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.052 [2024-07-12 17:31:21.087071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.087078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.052 [2024-07-12 17:31:21.087084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:21.087090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:17.052 [2024-07-12 17:31:21.087125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f7540 (9): Bad file descriptor 00:23:17.052 [2024-07-12 17:31:21.089966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:17.052 [2024-07-12 17:31:21.160298] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:17.052 [2024-07-12 17:31:24.604113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.052 [2024-07-12 17:31:24.604150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.052 [2024-07-12 17:31:24.604165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31104 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604302] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.053 [2024-07-12 17:31:24.604666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.053 [2024-07-12 17:31:24.604680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.053 [2024-07-12 17:31:24.604690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 
[2024-07-12 17:31:24.604725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604807] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.604987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.604995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.054 [2024-07-12 17:31:24.605072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.054 [2024-07-12 17:31:24.605087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 
[2024-07-12 17:31:24.605140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.054 [2024-07-12 17:31:24.605301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31696 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.054 [2024-07-12 17:31:24.605307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 
17:31:24.605391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605467] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:17.055 [2024-07-12 17:31:24.605554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31840 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31848 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31856 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605654] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31864 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31872 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31880 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31888 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 
[2024-07-12 17:31:24.605737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31896 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31904 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31912 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31920 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31928 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31936 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31944 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.055 [2024-07-12 17:31:24.605917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31952 len:8 PRP1 0x0 PRP2 0x0 00:23:17.055 [2024-07-12 17:31:24.605924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.055 [2024-07-12 17:31:24.605931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.055 [2024-07-12 17:31:24.605936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.605941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31960 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.605947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.605954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.605959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.605966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31968 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.605973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.605979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.605984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.605989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31976 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.605996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.606003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.606008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.606013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31984 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.606019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.606026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.606030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.606036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31992 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.606043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.606049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.606055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.606060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32000 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.606066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.606073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.606077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.606083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32008 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.606089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.606096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.606100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.606106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32016 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.606112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.606119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.606124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.606129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32024 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.616233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.616249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.616257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.616266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32032 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.616274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.616284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.616290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.616298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32040 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.616306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.616315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.616322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.616329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32048 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.616338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.616347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 
[2024-07-12 17:31:24.616354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.616363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32056 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.616372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.616385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.616392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.616399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32064 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.616408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.616417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.616423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.616430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32072 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.616438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.616447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.616454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.616461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:32080 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.616469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.616478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.056 [2024-07-12 17:31:24.616484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.056 [2024-07-12 17:31:24.616491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32088 len:8 PRP1 0x0 PRP2 0x0 00:23:17.056 [2024-07-12 17:31:24.616500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.616546] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bc2380 was disconnected and freed. reset controller. 00:23:17.056 [2024-07-12 17:31:24.616556] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:17.056 [2024-07-12 17:31:24.616582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.056 [2024-07-12 17:31:24.616592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.616602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.056 [2024-07-12 17:31:24.616611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.616621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.056 [2024-07-12 
17:31:24.616630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.616639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.056 [2024-07-12 17:31:24.616648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:24.616657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:17.056 [2024-07-12 17:31:24.616695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f7540 (9): Bad file descriptor 00:23:17.056 [2024-07-12 17:31:24.620560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:17.056 [2024-07-12 17:31:24.655418] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:17.056 [2024-07-12 17:31:29.005530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.056 [2024-07-12 17:31:29.005564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:29.005580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.056 [2024-07-12 17:31:29.005587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:29.005596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.056 [2024-07-12 17:31:29.005603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:29.005612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.056 [2024-07-12 17:31:29.005618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:29.005626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.056 [2024-07-12 17:31:29.005633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:29.005641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.056 [2024-07-12 17:31:29.005647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:29.005656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.056 [2024-07-12 17:31:29.005662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.056 [2024-07-12 17:31:29.005670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.056 [2024-07-12 17:31:29.005676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 
[2024-07-12 17:31:29.005824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.005917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.005934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.005948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.005962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.005976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:36880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.005990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.005998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 
17:31:29.006068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.057 [2024-07-12 17:31:29.006156] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.057 [2024-07-12 17:31:29.006265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.057 [2024-07-12 17:31:29.006271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.058 [2024-07-12 17:31:29.006279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.058 [2024-07-12 17:31:29.006286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.058 [2024-07-12 17:31:29.006294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.058 [2024-07-12 17:31:29.006302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.058 [2024-07-12 17:31:29.006310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.058 [2024-07-12 17:31:29.006316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.058 [2024-07-12 17:31:29.006324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.058 [2024-07-12 17:31:29.006331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated *NOTICE* pairs elided: in-flight WRITE commands (sqid:1, lba:37064-37400, len:8, SGL DATA BLOCK) each completed with ABORTED - SQ DELETION (00/08) during queue teardown ...]
[... repeated entries elided: queued WRITE commands (sqid:1, lba:37408-37664, len:8, PRP1 0x0 PRP2 0x0) each completed manually via nvme_qpair.c: 558:nvme_qpair_manual_complete_request with ABORTED - SQ DELETION (00/08), interleaved with nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o ...]
[2024-07-12 17:31:29.018271] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bc2170 was disconnected and freed. reset controller.
[2024-07-12 17:31:29.018281] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
[... four repeated admin-queue entries elided: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 each completed with ABORTED - SQ DELETION (00/08) ...]
[2024-07-12 17:31:29.018392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-12 17:31:29.018419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f7540 (9): Bad file descriptor
[2024-07-12 17:31:29.022283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-12 17:31:29.094220] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.

Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4000
NVMe0n1 : 15.01 10912.00 42.63 511.53 0.00 11182.61 441.66 20743.57
===================================================================================================================
Total : 10912.00 42.63 511.53 0.00 11182.61 441.66 20743.57

Received shutdown signal, test time was about 15.000000 seconds

Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
===================================================================================================================
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00

17:31:35 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:17.060 17:31:35 nvmf_tcp.nvmf_failover -- host/failover.sh@65 --
# count=3 00:23:17.060 17:31:35 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:17.060 17:31:35 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4164778 00:23:17.060 17:31:35 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4164778 /var/tmp/bdevperf.sock 00:23:17.060 17:31:35 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:17.060 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 4164778 ']' 00:23:17.060 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.060 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.060 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:17.060 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.060 17:31:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:17.627 17:31:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.627 17:31:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:17.627 17:31:36 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:17.627 [2024-07-12 17:31:36.318322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:17.627 17:31:36 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:17.886 [2024-07-12 17:31:36.510865] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:17.886 17:31:36 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:18.145 NVMe0n1 00:23:18.145 17:31:36 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:18.404 00:23:18.404 17:31:37 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:18.971 00:23:18.971 17:31:37 nvmf_tcp.nvmf_failover -- host/failover.sh@82 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:18.971 17:31:37 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:18.971 17:31:37 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:19.229 17:31:37 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:22.514 17:31:40 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:22.514 17:31:40 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:22.514 17:31:41 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4165703 00:23:22.514 17:31:41 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:22.514 17:31:41 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 4165703 00:23:23.533 0 00:23:23.533 17:31:42 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:23.533 [2024-07-12 17:31:35.341399] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:23:23.533 [2024-07-12 17:31:35.341452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4164778 ]
00:23:23.533 EAL: No free 2048 kB hugepages reported on node 1
00:23:23.533 [2024-07-12 17:31:35.395548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:23.533 [2024-07-12 17:31:35.464776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:23.533 [2024-07-12 17:31:37.893405] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:23.533 [2024-07-12 17:31:37.893449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:23.533 [2024-07-12 17:31:37.893461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:23.533 [2024-07-12 17:31:37.893469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:23.533 [2024-07-12 17:31:37.893476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:23.533 [2024-07-12 17:31:37.893483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:23.533 [2024-07-12 17:31:37.893489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:23.533 [2024-07-12 17:31:37.893496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:23.533 [2024-07-12 17:31:37.893502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:23.533 [2024-07-12 17:31:37.893508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:23.533 [2024-07-12 17:31:37.893533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:23.533 [2024-07-12 17:31:37.893546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1712540 (9): Bad file descriptor
00:23:23.533 [2024-07-12 17:31:37.955741] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:23.533 Running I/O for 1 seconds...
00:23:23.533
00:23:23.533 Latency(us)
00:23:23.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:23.533 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:23.533 Verification LBA range: start 0x0 length 0x4000
00:23:23.533 NVMe0n1 : 1.00 10898.37 42.57 0.00 0.00 11701.81 2179.78 11511.54
00:23:23.533 ===================================================================================================================
00:23:23.533 Total : 10898.37 42.57 0.00 0.00 11701.81 2179.78 11511.54
00:23:23.533 17:31:42 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:23.533 17:31:42 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:23:23.807 17:31:42 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:24.066 17:31:42 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:24.066 17:31:42 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:23:24.066 17:31:42 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:24.327 17:31:42 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:23:27.614 17:31:45 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:27.614 17:31:45 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:23:27.614 17:31:46 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 4164778
00:23:27.614 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 4164778 ']'
00:23:27.614 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 4164778
00:23:27.614 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:23:27.614 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:27.614 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4164778
00:23:27.614 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:27.614 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:27.614 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4164778'
killing process with pid 4164778
00:23:27.614 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 4164778
00:23:27.614 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 4164778
00:23:27.614 17:31:46 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:23:27.614 17:31:46 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:27.872 17:31:46 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:23:27.872 17:31:46 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:27.872 17:31:46 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:23:27.872 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 4161542 ']'
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 4161542
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 4161542 ']'
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 4161542
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:27.873 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4161542
00:23:28.132 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:28.132 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:28.132 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4161542'
killing process with pid 4161542
00:23:28.132 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 4161542
00:23:28.132 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 4161542
00:23:28.132 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:28.132 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:28.132 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:28.132 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:28.132 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:28.132 17:31:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:28.132 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:28.132 17:31:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:30.670 17:31:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:30.670
00:23:30.670 real 0m38.364s
00:23:30.670 user 2m4.333s
00:23:30.670 sys 0m7.234s
00:23:30.670 17:31:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:30.670 17:31:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:30.670 ************************************
00:23:30.670 END TEST nvmf_failover
00:23:30.670 ************************************
00:23:30.670 17:31:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:30.670 17:31:48 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:23:30.670 17:31:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:30.670 17:31:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:30.670 17:31:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:30.670 ************************************
00:23:30.670 START TEST nvmf_host_discovery
00:23:30.670 ************************************
00:23:30.670 17:31:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:23:30.670 * Looking for test storage...
00:23:30.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:23:30.670 17:31:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:23:30.671 17:31:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:23:30.671 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:30.671 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:30.671 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs
00:23:30.671 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no
00:23:30.671 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns
00:23:30.671 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:30.671 17:31:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:30.671 17:31:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:30.671 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:23:30.671 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:23:30.671 17:31:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable
00:23:30.671 17:31:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=()
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=()
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=()
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=()
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=()
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=()
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=()
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:35.944 17:31:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:35.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:35.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms
00:23:35.944
00:23:35.944 --- 10.0.0.2 ping statistics ---
00:23:35.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:35.944 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms
00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:35.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:35.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:23:35.944 00:23:35.944 --- 10.0.0.1 ping statistics --- 00:23:35.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.944 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=4169921 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 4169921 00:23:35.944 
17:31:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 4169921 ']' 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.944 17:31:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.945 17:31:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.945 17:31:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.945 17:31:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.945 [2024-07-12 17:31:54.197259] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:23:35.945 [2024-07-12 17:31:54.197302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.945 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.945 [2024-07-12 17:31:54.251735] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.945 [2024-07-12 17:31:54.322804] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.945 [2024-07-12 17:31:54.322844] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.945 [2024-07-12 17:31:54.322850] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.945 [2024-07-12 17:31:54.322856] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
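The nvmf_tcp_init block traced above (nvmf/common.sh@229 through @268) is easier to follow pulled out of the xtrace noise. The sketch below replays the exact ip/iptables commands from the log; the DRY_RUN wrapper and its default are additions so the sketch can be exercised unprivileged, and the cvl_0_* names are simply the netdevs this rig enumerates under 0000:86:00.x.

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace setup performed by nvmf_tcp_init in the
# trace above. DRY_RUN defaults to 1 and prints the commands instead of
# executing them, since the real commands need root and the cvl_0_*
# interfaces; set DRY_RUN=0 on a matching rig to actually apply them.
set -euo pipefail
: "${DRY_RUN:=1}"

TARGET_IF=cvl_0_0      # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1   # stays in the root namespace, gets 10.0.0.1
NETNS=cvl_0_0_ns_spdk

run() {
    if [[ "$DRY_RUN" == 1 ]]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity check in both directions, as in the log:
run ping -c 1 10.0.0.2
run ip netns exec "$NETNS" ping -c 1 10.0.0.1
```

With DRY_RUN=0 this reproduces the topology the log then verifies with the two pings: the initiator side at 10.0.0.1 in the root namespace, the target at 10.0.0.2 inside cvl_0_0_ns_spdk, and port 4420 opened in the INPUT chain; nvmf_tgt is subsequently launched through `ip netns exec cvl_0_0_ns_spdk`, which is what NVMF_TARGET_NS_CMD prepends.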
00:23:35.945 [2024-07-12 17:31:54.322860] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.945 [2024-07-12 17:31:54.322895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.513 17:31:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.513 17:31:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:36.513 17:31:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.513 17:31:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:36.513 17:31:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.513 [2024-07-12 17:31:55.036434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.513 [2024-07-12 17:31:55.048726] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:36.513 17:31:55 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.513 null0 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.513 null1 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4170166 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4170166 /tmp/host.sock 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 4170166 ']' 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:36.513 
17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:36.513 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.513 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.513 [2024-07-12 17:31:55.124051] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:23:36.513 [2024-07-12 17:31:55.124092] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170166 ] 00:23:36.513 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.513 [2024-07-12 17:31:55.178208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.513 [2024-07-12 17:31:55.257144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.452 17:31:55 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:37.452 17:31:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 
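By this point the trace has issued a handful of JSON-RPC calls split across the two SPDK instances. Collected in one place for reference — every command and flag below is copied from the rpc_cmd lines above; the `rpc` function is a local stand-in that echoes rather than driving SPDK's scripts/rpc.py:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the rpc_cmd wrapper: echo the call instead of invoking
# SPDK's scripts/rpc.py, so this summary runs without a live target.
rpc() { echo "rpc.py $*"; }

# Target instance (default socket /var/tmp/spdk.sock), host/discovery.sh@32-37:
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009
rpc bdev_null_create null0 1000 512
rpc bdev_null_create null1 1000 512
rpc bdev_wait_for_examine

# Host instance (started with -r /tmp/host.sock), host/discovery.sh@50-51:
rpc -s /tmp/host.sock log_set_flag bdev_nvme
rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
```

The remainder of the trace polls the host instance (`bdev_nvme_get_controllers`, `bdev_get_bdevs` over /tmp/host.sock) while provisioning nqn.2016-06.io.spdk:cnode0 on the target side, watching discovery attach controllers and surface namespaces.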
00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:37.452 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 
-- # [[ '' == '' ]] 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.453 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.712 [2024-07-12 17:31:56.271798] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:37.712 17:31:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:38.280 [2024-07-12 17:31:56.996957] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:38.280 [2024-07-12 17:31:56.996976] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:38.280 [2024-07-12 17:31:56.996988] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:38.539 [2024-07-12 17:31:57.083245] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new 
subsystem nvme0 00:23:38.539 [2024-07-12 17:31:57.187176] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:38.539 [2024-07-12 17:31:57.187194] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:38.797 
17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
get_subsystem_paths nvme0 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:38.797 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.056 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 
00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' 
'"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
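Each readiness check in this trace goes through the waitforcondition helper whose expansion keeps repeating above (autotest_common.sh@912-918). A minimal reimplementation, matching the `local max=10` / `(( max-- ))` / `eval` / `sleep 1` structure visible in the xtrace; the failure message is an addition:

```shell
#!/usr/bin/env bash
# Poll an arbitrary bash condition string up to max times, one second
# apart, succeeding as soon as it evaluates true and failing otherwise.
waitforcondition() {
    local cond=$1
    local max=${2:-10}
    while (( max-- )); do
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    echo "Condition not met: $cond" >&2   # addition for diagnosability
    return 1
}
```

The trace calls it with string conditions such as `[[ "$(get_subsystem_names)" == "nvme0" ]]`, re-running bdev_nvme_get_controllers once per second until discovery has attached nvme0.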
00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.057 [2024-07-12 17:31:57.755905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:39.057 [2024-07-12 17:31:57.756413] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:39.057 [2024-07-12 17:31:57.756434] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 
00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.057 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.316 [2024-07-12 17:31:57.843682] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- 
# (( max-- )) 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.316 [2024-07-12 17:31:57.902199] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:39.316 [2024-07-12 17:31:57.902214] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:39.316 [2024-07-12 17:31:57.902220] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:39.316 17:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == 
'"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:40.252 17:31:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' 
== 'expected_count))' 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.253 [2024-07-12 17:31:58.995432] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:40.253 [2024-07-12 17:31:58.995452] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:40.253 [2024-07-12 17:31:58.999473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.253 [2024-07-12 17:31:58.999490] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.253 [2024-07-12 17:31:58.999498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.253 [2024-07-12 17:31:58.999505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.253 [2024-07-12 17:31:58.999512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.253 [2024-07-12 17:31:58.999519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.253 [2024-07-12 17:31:58.999526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.253 [2024-07-12 17:31:58.999533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.253 [2024-07-12 17:31:58.999540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1041f10 is same with the state(5) to be set 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:40.253 17:31:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' 
'"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:40.253 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:40.253 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.253 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.253 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.253 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.253 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.253 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.253 [2024-07-12 17:31:59.009487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1041f10 (9): Bad file descriptor 00:23:40.253 [2024-07-12 17:31:59.019528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:40.253 [2024-07-12 17:31:59.019701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.253 [2024-07-12 17:31:59.019715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1041f10 with addr=10.0.0.2, port=4420 00:23:40.253 [2024-07-12 17:31:59.019723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1041f10 is same with the state(5) to be set 00:23:40.253 [2024-07-12 17:31:59.019733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1041f10 (9): Bad file descriptor 00:23:40.253 [2024-07-12 17:31:59.019743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:40.253 [2024-07-12 17:31:59.019750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:40.253 
[2024-07-12 17:31:59.019758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:40.253 [2024-07-12 17:31:59.019768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:40.253 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.253 [2024-07-12 17:31:59.029584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:40.253 [2024-07-12 17:31:59.029717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.253 [2024-07-12 17:31:59.029728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1041f10 with addr=10.0.0.2, port=4420 00:23:40.253 [2024-07-12 17:31:59.029735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1041f10 is same with the state(5) to be set 00:23:40.253 [2024-07-12 17:31:59.029744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1041f10 (9): Bad file descriptor 00:23:40.253 [2024-07-12 17:31:59.029753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:40.253 [2024-07-12 17:31:59.029759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:40.253 [2024-07-12 17:31:59.029765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:40.253 [2024-07-12 17:31:59.029774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:40.513 [2024-07-12 17:31:59.039634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-07-12 17:31:59.039819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-12 17:31:59.039832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1041f10 with addr=10.0.0.2, port=4420
[2024-07-12 17:31:59.039840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1041f10 is same with the state(5) to be set
[2024-07-12 17:31:59.039850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1041f10 (9): Bad file descriptor
[2024-07-12 17:31:59.039859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-12 17:31:59.039865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-07-12 17:31:59.039872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-07-12 17:31:59.039881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:40.513 [2024-07-12 17:31:59.049688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-07-12 17:31:59.049808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-12 17:31:59.049823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1041f10 with addr=10.0.0.2, port=4420
[2024-07-12 17:31:59.049830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1041f10 is same with the state(5) to be set
[2024-07-12 17:31:59.049839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1041f10 (9): Bad file descriptor
[2024-07-12 17:31:59.049848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-12 17:31:59.049854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-07-12 17:31:59.049860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-07-12 17:31:59.049869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:40.513 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
[2024-07-12 17:31:59.059737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-07-12 17:31:59.059864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-12 17:31:59.059875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1041f10 with addr=10.0.0.2, port=4420
[2024-07-12 17:31:59.059882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1041f10 is same with the state(5) to be set
[2024-07-12 17:31:59.059892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1041f10 (9): Bad file descriptor
[2024-07-12 17:31:59.059900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-12 17:31:59.059907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-07-12 17:31:59.059913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-07-12 17:31:59.059922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-07-12 17:31:59.069789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-07-12 17:31:59.069968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-12 17:31:59.069980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1041f10 with addr=10.0.0.2, port=4420
00:23:40.514 [2024-07-12 17:31:59.069987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1041f10 is same with the state(5) to be set
[2024-07-12 17:31:59.070001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1041f10 (9): Bad file descriptor
[2024-07-12 17:31:59.070010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-12 17:31:59.070016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-07-12 17:31:59.070023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-07-12 17:31:59.070032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-07-12 17:31:59.079841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-07-12 17:31:59.080025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-12 17:31:59.080036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1041f10 with addr=10.0.0.2, port=4420
[2024-07-12 17:31:59.080043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1041f10 is same with the state(5) to be set
[2024-07-12 17:31:59.080052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1041f10 (9): Bad file descriptor
[2024-07-12 17:31:59.080061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-12 17:31:59.080067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-07-12 17:31:59.080073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-07-12 17:31:59.080082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:40.514 [2024-07-12 17:31:59.082129] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
[2024-07-12 17:31:59.082144] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]]
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]]
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]]
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:40.774 17:31:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:41.710 [2024-07-12 17:32:00.424460] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
[2024-07-12 17:32:00.424481] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
[2024-07-12 17:32:00.424494] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:41.969 [2024-07-12 17:32:00.510751] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
[2024-07-12 17:32:00.571108] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
[2024-07-12 17:32:00.571136] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*:
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.969 request: 00:23:41.969 { 00:23:41.969 "name": "nvme", 00:23:41.969 "trtype": "tcp", 00:23:41.969 "traddr": "10.0.0.2", 00:23:41.969 "adrfam": "ipv4", 00:23:41.969 "trsvcid": "8009", 00:23:41.969 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:41.969 "wait_for_attach": true, 00:23:41.969 "method": "bdev_nvme_start_discovery", 00:23:41.969 "req_id": 1 00:23:41.969 } 00:23:41.969 Got JSON-RPC error 
response 00:23:41.969 response: 00:23:41.969 { 00:23:41.969 "code": -17, 00:23:41.969 "message": "File exists" 00:23:41.969 } 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.969 17:32:00 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.969 request: 00:23:41.969 { 00:23:41.969 "name": "nvme_second", 00:23:41.969 
"trtype": "tcp", 00:23:41.969 "traddr": "10.0.0.2", 00:23:41.969 "adrfam": "ipv4", 00:23:41.969 "trsvcid": "8009", 00:23:41.969 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:41.969 "wait_for_attach": true, 00:23:41.969 "method": "bdev_nvme_start_discovery", 00:23:41.969 "req_id": 1 00:23:41.969 } 00:23:41.969 Got JSON-RPC error response 00:23:41.969 response: 00:23:41.969 { 00:23:41.969 "code": -17, 00:23:41.969 "message": "File exists" 00:23:41.969 } 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:41.969 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:41.970 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:41.970 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:41.970 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.970 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.970 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:41.970 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:41.970 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:42.229 17:32:00 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 
-f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.229 17:32:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.162 [2024-07-12 17:32:01.811353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.162 [2024-07-12 17:32:01.811386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107ea00 with addr=10.0.0.2, port=8010 00:23:43.162 [2024-07-12 17:32:01.811416] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:43.162 [2024-07-12 17:32:01.811423] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:43.162 [2024-07-12 17:32:01.811431] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:44.093 [2024-07-12 17:32:02.813852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.093 [2024-07-12 17:32:02.813874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x107ea00 with addr=10.0.0.2, port=8010 00:23:44.093 [2024-07-12 17:32:02.813885] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:44.093 [2024-07-12 17:32:02.813891] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:44.093 [2024-07-12 17:32:02.813897] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:45.469 [2024-07-12 17:32:03.815995] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:45.469 request: 00:23:45.469 { 00:23:45.469 "name": "nvme_second", 00:23:45.469 "trtype": "tcp", 00:23:45.469 "traddr": "10.0.0.2", 00:23:45.469 "adrfam": "ipv4", 00:23:45.469 "trsvcid": "8010", 00:23:45.469 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:45.469 "wait_for_attach": false, 
00:23:45.469 "attach_timeout_ms": 3000, 00:23:45.469 "method": "bdev_nvme_start_discovery", 00:23:45.469 "req_id": 1 00:23:45.469 } 00:23:45.469 Got JSON-RPC error response 00:23:45.469 response: 00:23:45.469 { 00:23:45.469 "code": -110, 00:23:45.469 "message": "Connection timed out" 00:23:45.469 } 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4170166 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # 
nvmftestfini 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.469 rmmod nvme_tcp 00:23:45.469 rmmod nvme_fabrics 00:23:45.469 rmmod nvme_keyring 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 4169921 ']' 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 4169921 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 4169921 ']' 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 4169921 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4169921 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 4169921' 00:23:45.469 killing process with pid 4169921 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 4169921 00:23:45.469 17:32:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 4169921 00:23:45.469 17:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.469 17:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:45.469 17:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.469 17:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.469 17:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.469 17:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.469 17:32:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.469 17:32:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:48.016 00:23:48.016 real 0m17.247s 00:23:48.016 user 0m21.795s 00:23:48.016 sys 0m5.168s 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.016 ************************************ 00:23:48.016 END TEST nvmf_host_discovery 00:23:48.016 ************************************ 00:23:48.016 17:32:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:48.016 17:32:06 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:48.016 17:32:06 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:48.016 17:32:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:48.016 17:32:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:48.016 ************************************ 00:23:48.016 START TEST nvmf_host_multipath_status 00:23:48.016 ************************************ 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:48.016 * Looking for test storage... 00:23:48.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.016 17:32:06 
nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@47 -- # : 0 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:48.016 17:32:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:53.305 17:32:11 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.305 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.306 17:32:11 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:53.306 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:53.306 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.306 17:32:11 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:53.306 Found net devices under 0000:86:00.0: cvl_0_0 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:53.306 Found net devices under 0000:86:00.1: cvl_0_1 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
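The PCI-to-netdev discovery that produced the two `Found net devices under 0000:86:00.x` records above boils down to globbing `/sys/bus/pci/devices/<pci>/net/` for each candidate NIC. A minimal sketch of that loop, using a throwaway directory in place of `/sys` so it runs without the hardware — the `cvl_0_0`/`cvl_0_1` names mirror the log, everything else here is hypothetical:

```shell
sysfs=$(mktemp -d)   # stand-in for /sys/bus/pci/devices; the real path needs the NICs present
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

# Mirrors nvmf/common.sh's pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*):
# list the kernel netdev names registered under one PCI function.
net_devs_for_pci() {
    pci=$1
    for d in "$sysfs/$pci/net/"*; do
        [ -e "$d" ] && echo "${d##*/}"   # ${d##*/} keeps only the interface name
    done
}

for pci in 0000:86:00.0 0000:86:00.1; do
    echo "Found net devices under $pci: $(net_devs_for_pci "$pci")"
done
```

With a real `/sys` this same glob is what lets the harness reject PCI functions whose driver registered no net device.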
00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:53.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:53.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:23:53.306 00:23:53.306 --- 10.0.0.2 ping statistics --- 00:23:53.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.306 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:23:53.306 00:23:53.306 --- 10.0.0.1 ping statistics --- 00:23:53.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.306 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=4175059 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 4175059 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 4175059 ']' 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.306 17:32:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:53.306 [2024-07-12 17:32:11.647713] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:23:53.306 [2024-07-12 17:32:11.647758] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.306 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.307 [2024-07-12 17:32:11.704645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:53.307 [2024-07-12 17:32:11.784172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.307 [2024-07-12 17:32:11.784210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.307 [2024-07-12 17:32:11.784219] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.307 [2024-07-12 17:32:11.784225] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.307 [2024-07-12 17:32:11.784230] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
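The namespace plumbing a few records up (`ip netns add`, moving `cvl_0_0` into the namespace, addressing both ends, then the two ping checks) needs root and the physical ports. Below is a dry-run sketch of that same sequence — hypothetical except for the interface names and addresses taken from the log; with `DRYRUN=1` (the default here) it only prints the commands it would run:

```shell
# Echo instead of execute unless DRYRUN=0; the real commands need root.
run() { if [ "${DRYRUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

setup_tcp_netns() {
    ns=$1 tgt=$2 ini=$3            # e.g. cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
    run ip netns add "$ns"
    run ip link set "$tgt" netns "$ns"               # target port lives in the netns
    run ip addr add 10.0.0.1/24 dev "$ini"           # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    run ip link set "$ini" up
    run ip netns exec "$ns" ip link set "$tgt" up
    run ip netns exec "$ns" ip link set lo up
    run ping -c 1 10.0.0.2                           # initiator -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1       # target -> initiator
}

setup_tcp_netns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Putting the target port in its own namespace is what forces initiator traffic through the kernel TCP stack instead of short-circuiting over loopback; it also explains why `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk` in the record above.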
00:23:53.307 [2024-07-12 17:32:11.784278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.307 [2024-07-12 17:32:11.784281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.872 17:32:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.872 17:32:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:53.872 17:32:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.872 17:32:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:53.872 17:32:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:53.872 17:32:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.872 17:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4175059 00:23:53.872 17:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:53.872 [2024-07-12 17:32:12.640947] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.130 17:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:54.130 Malloc0 00:23:54.130 17:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:54.388 17:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.646 17:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.646 [2024-07-12 17:32:13.361929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.646 17:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:54.905 [2024-07-12 17:32:13.530369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:54.905 17:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4175496 00:23:54.905 17:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:54.905 17:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.905 17:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4175496 /var/tmp/bdevperf.sock 00:23:54.905 17:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 4175496 ']' 00:23:54.905 17:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.905 17:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.905 17:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:54.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.905 17:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.905 17:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:55.841 17:32:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:55.841 17:32:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:55.841 17:32:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:55.841 17:32:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:56.409 Nvme0n1 00:23:56.409 17:32:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:56.667 Nvme0n1 00:23:56.667 17:32:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:56.667 17:32:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:58.572 17:32:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:58.572 17:32:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:58.831 17:32:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:59.090 17:32:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:00.025 17:32:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:00.025 17:32:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:00.025 17:32:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.025 17:32:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:00.284 17:32:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.284 17:32:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:00.284 17:32:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.284 17:32:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:00.543 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.543 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:00.543 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.543 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:00.543 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.543 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:00.543 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.543 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:00.802 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.802 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:00.802 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.802 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.061 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.061 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:01.061 17:32:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.061 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:01.061 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.062 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:01.062 17:32:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:01.321 17:32:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:01.604 17:32:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:02.540 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:02.540 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:02.540 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.540 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:02.799 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:24:02.799 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:02.799 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.799 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.058 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.058 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.059 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.059 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:03.059 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.059 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:03.059 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.059 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:03.318 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.318 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:24:03.318 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.318 17:32:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:03.577 17:32:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.577 17:32:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:03.577 17:32:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:03.577 17:32:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.836 17:32:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.836 17:32:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:03.836 17:32:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:03.836 17:32:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:04.094 17:32:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:05.028 17:32:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:05.028 17:32:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:05.028 17:32:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.028 17:32:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:05.287 17:32:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.287 17:32:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:05.287 17:32:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.287 17:32:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:05.545 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:05.545 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:05.545 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.545 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:05.545 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.545 17:32:24 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:05.804 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.804 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:05.804 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.804 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:05.804 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.804 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:06.063 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.063 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:06.063 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.063 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:06.322 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.322 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
00:24:06.322 17:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:06.323 17:32:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:06.581 17:32:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:07.517 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:07.517 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:07.517 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.517 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:07.774 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.774 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:07.774 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.774 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:08.033 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:24:08.033 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:08.033 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:08.033 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.291 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.291 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:08.291 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.291 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:08.291 17:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.291 17:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:08.291 17:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.291 17:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:08.551 17:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.551 17:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:24:08.551 17:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.551 17:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:08.810 17:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.810 17:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:08.810 17:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:08.810 17:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:09.069 17:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:10.002 17:32:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:10.002 17:32:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:10.002 17:32:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.002 17:32:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:10.260 17:32:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.260 17:32:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:10.260 17:32:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.260 17:32:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:10.518 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.518 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:10.518 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:10.518 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.518 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.518 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:10.518 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.518 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:10.775 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.775 
17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:10.775 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.775 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:11.033 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.033 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:11.033 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.033 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:11.033 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.033 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:11.033 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:11.290 17:32:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:11.548 17:32:30 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:24:12.490 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:12.490 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:12.490 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.490 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:12.788 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.788 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:12.788 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.788 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:12.788 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.788 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:12.788 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.788 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:13.046 17:32:31 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.047 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:13.047 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.047 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:13.305 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.305 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:13.305 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:13.305 17:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.563 17:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.563 17:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:13.563 17:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.563 17:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:13.563 17:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.563 17:32:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:13.822 17:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:13.822 17:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:14.081 17:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:14.338 17:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:15.269 17:32:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:15.269 17:32:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:15.269 17:32:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.269 17:32:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:15.527 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.527 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:15.527 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.527 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:15.527 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.527 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:15.527 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.527 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:15.784 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.784 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:15.784 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.784 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.040 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.040 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.040 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:24:16.041 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:16.297 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.297 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:16.297 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.297 17:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:16.297 17:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.297 17:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:16.297 17:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:16.555 17:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:16.814 17:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:17.750 17:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:17.750 17:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:17.750 17:32:36 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.750 17:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.009 17:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:18.009 17:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:18.009 17:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.009 17:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:18.268 17:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.268 17:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:18.268 17:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.268 17:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:18.268 17:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.268 17:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:18.268 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.268 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:18.527 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.527 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:18.527 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.527 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:18.785 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.785 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:18.785 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.786 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:18.786 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.786 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:18.786 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:19.044 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:19.303 17:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:20.237 17:32:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:20.237 17:32:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:20.237 17:32:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.237 17:32:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:20.495 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.495 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:20.495 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.495 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:20.753 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.753 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:20.753 17:32:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.753 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:20.753 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.753 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:20.753 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.753 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:21.012 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.012 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:21.012 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.012 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:21.272 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.272 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:21.272 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.272 17:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:21.530 17:32:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.530 17:32:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:21.530 17:32:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:21.530 17:32:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:21.789 17:32:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:22.727 17:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:22.727 17:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:22.727 17:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.727 17:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:22.986 17:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.986 17:32:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:22.986 17:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:22.986 17:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:23.245 17:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:23.245 17:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:23.245 17:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:23.245 17:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:23.245 17:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:23.245 17:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:23.245 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:23.245 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:23.504 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:23.504 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:23.504 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:23.504 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:23.763 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:23.763 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:24:23.763 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:23.763 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4175496
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 4175496 ']'
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 4175496
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4175496
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4175496'
00:24:24.049 killing process with pid 4175496
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 4175496
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 4175496
00:24:24.049 Connection closed with partial response:
00:24:24.049
00:24:24.049
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4175496
00:24:24.049 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:24.049 [2024-07-12 17:32:13.589217] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:24:24.049 [2024-07-12 17:32:13.589264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175496 ]
00:24:24.049 EAL: No free 2048 kB hugepages reported on node 1
00:24:24.049 [2024-07-12 17:32:13.639028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:24.049 [2024-07-12 17:32:13.713199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:24.049 Running I/O for 90 seconds...
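The port_status checks traced above pair an RPC call with a jq filter and a string comparison. The sketch below reproduces that pattern in a self-contained way: the JSON shape is inferred from the jq filters in the trace, and a canned sample stands in for the live `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths` output, so this is an illustration of the check, not the actual test script.

```shell
#!/usr/bin/env bash
# Canned stand-in for bdev_nvme_get_io_paths output; the field names
# (current/connected/accessible, transport.trsvcid) come from the trace,
# everything else about the shape is an assumption.
sample_io_paths='{
  "poll_groups": [ { "io_paths": [
    { "transport": { "trsvcid": "4420" }, "current": true,  "connected": true, "accessible": true  },
    { "transport": { "trsvcid": "4421" }, "current": false, "connected": true, "accessible": false }
  ] } ]
}'

# port_status PORT FIELD EXPECTED: succeed iff the io_path on PORT reports
# FIELD == EXPECTED, mirroring the [[ value == expected ]] checks in the log.
port_status() {
  local port=$1 field=$2 expected=$3 actual
  actual=$(jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field" <<<"$sample_io_paths")
  [[ "$actual" == "$expected" ]]
}

port_status 4421 current false && echo "4421 current=false: OK"
port_status 4420 accessible true && echo "4420 accessible=true: OK"
```

In the real test the sample is replaced by a live RPC call, but the jq filter and comparison are identical to the ones shown in the trace.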
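The killprocess trace above (check the pid, resolve its command name, echo, kill, wait) can be sketched as a small helper. This is a reconstruction from the logged steps in common/autotest_common.sh, not the actual helper; error handling beyond what the trace shows is an assumption.

```shell
#!/usr/bin/env bash
# killprocess PID: rough reconstruction of the helper traced in the log.
killprocess() {
  local pid=$1
  [[ -n "$pid" ]] || return 1               # '[' -z "$pid" ']' guard in the trace
  kill -0 "$pid" 2>/dev/null || return 1    # is the process still alive?
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_2 in the trace
  echo "killing process with pid $pid ($process_name)"
  kill "$pid"
  wait "$pid" 2>/dev/null || true           # reap; a killed child exits non-zero
}

# Demonstration against a throwaway background process.
sleep 30 &
killprocess $!
```

The trace then issues a second `wait` from multipath_status.sh@139 to collect bdevperf's exit, which is why the "Connection closed with partial response" line appears between the kill and the log dump.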
00:24:24.049 [2024-07-12 17:32:27.524761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:24.049 [2024-07-12 17:32:27.524802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
[... the same command/completion pair repeats for WRITE lba:37488 through lba:38208 (step 8, SGL DATA BLOCK OFFSET) and READ lba:37296 through lba:37344 (step 8, SGL TRANSPORT DATA BLOCK); every completion between 17:32:27.524838 and 17:32:27.527290 is ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:24:24.051 [2024-07-12 17:32:27.527303] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.051 [2024-07-12 17:32:27.527309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.051 [2024-07-12 17:32:27.527328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.051 [2024-07-12 17:32:27.527347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.051 [2024-07-12 17:32:27.527365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.051 [2024-07-12 17:32:27.527391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.051 [2024-07-12 17:32:27.527410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.051 [2024-07-12 17:32:27.527429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.051 [2024-07-12 17:32:27.527449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.051 [2024-07-12 17:32:27.527475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.051 [2024-07-12 17:32:27.527496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.051 [2024-07-12 17:32:27.527515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.051 [2024-07-12 17:32:27.527913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.051 [2024-07-12 17:32:27.527934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.051 [2024-07-12 17:32:27.527953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.051 [2024-07-12 17:32:27.527973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.527986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.051 [2024-07-12 17:32:27.527992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.528005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.051 [2024-07-12 17:32:27.528011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.528024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.051 [2024-07-12 17:32:27.528031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.528044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.051 [2024-07-12 17:32:27.528050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.528062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.051 [2024-07-12 17:32:27.528069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.528081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.051 [2024-07-12 17:32:27.528088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.528100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.051 [2024-07-12 17:32:27.528106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.528121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.051 [2024-07-12 17:32:27.528128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:24.051 [2024-07-12 17:32:27.528140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.052 [2024-07-12 17:32:27.528146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.052 [2024-07-12 17:32:27.528167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.052 [2024-07-12 17:32:27.528186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.052 [2024-07-12 17:32:27.528205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.052 [2024-07-12 17:32:27.528243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.528955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.528962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.529125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.529134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.529147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.529154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.529166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.529173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.529185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.529192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.529206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.529213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.529225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.529231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.529244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.529252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.529264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.529271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.529283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.529289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.529302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.052 [2024-07-12 17:32:27.529308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:24.052 [2024-07-12 17:32:27.529324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-12 17:32:27.529331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:24.053 [2024-07-12 17:32:27.529343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-12 17:32:27.529350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:24.053 [2024-07-12 17:32:27.529361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-12 17:32:27.529368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:24.053 [2024-07-12 17:32:27.529387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-12 17:32:27.529394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:24.053 [2024-07-12 17:32:27.529406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-12 17:32:27.529412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:24.053 [2024-07-12 17:32:27.529425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-12 17:32:27.529432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:24.053 [2024-07-12 17:32:27.529444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-12 17:32:27.529450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:24.053 [2024-07-12 17:32:27.529463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-12 17:32:27.529470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.053 [2024-07-12 17:32:27.529482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-12 17:32:27.529489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:24.053 [2024-07-12 17:32:27.529503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-12 17:32:27.529509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:24.053 [2024-07-12 17:32:27.529522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:24.053 [2024-07-12 17:32:27.529530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:24.054 [2024-07-12 17:32:27.540488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.054 [2024-07-12 17:32:27.540494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0
[... several hundred further nvme_qpair NOTICE pairs omitted: WRITE and READ commands on sqid:1, nsid:1, lba 37296-38312, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0063-0056, timestamps 17:32:27.529522-17:32:27.542565 ...]
00:24:24.055 [2024-07-12 17:32:27.542565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:24.055 [2024-07-12 17:32:27.542934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.055 [2024-07-12 17:32:27.542941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.542953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.542960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.542973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.542979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.542992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.542999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.543985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.543992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.544004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.544010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.544022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.056 [2024-07-12 17:32:27.544029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.544041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-12 17:32:27.544050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.544062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-12 17:32:27.544068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.544081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-12 17:32:27.544087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.544099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-12 17:32:27.544106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.544118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.056 [2024-07-12 17:32:27.544125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:24.056 [2024-07-12 17:32:27.544137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.544144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.544156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.544163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.544175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.544184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.544196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.544203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.544215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.544222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.544234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.544241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.544254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.544261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.544273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.544282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.544294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.544301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.544313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.549836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.549851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.549859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.549871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.549878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.549891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.549897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.549909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.549916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.549928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.549935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.549947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.549953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.549966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.549972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550336] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.057 [2024-07-12 17:32:27.550744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550775] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.550985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.550991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.551003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.551010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.551022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.551029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:24.057 [2024-07-12 17:32:27.551041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.057 [2024-07-12 17:32:27.551047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.551991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.551998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.552010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.552016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.552029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.552037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:24.058 [2024-07-12 17:32:27.552766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.058 [2024-07-12 17:32:27.552782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.552797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.552803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.552816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.552823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.552835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.552842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.552854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.552860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.552873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.552879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.552891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.552898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.552910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.552917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.552929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.552936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.552948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.552954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.552967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.552973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.552985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.552992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.059 [2024-07-12 17:32:27.553108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.059 [2024-07-12 17:32:27.553127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.059 [2024-07-12 17:32:27.553145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.059 [2024-07-12 17:32:27.553164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.059 [2024-07-12 17:32:27.553183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.059 [2024-07-12 17:32:27.553201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.059 [2024-07-12 17:32:27.553220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.059 [2024-07-12 17:32:27.553908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.059 [2024-07-12 17:32:27.553927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.059 [2024-07-12 17:32:27.553946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.059 [2024-07-12 17:32:27.553966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:24.059 [2024-07-12 17:32:27.553978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.060 [2024-07-12 17:32:27.553985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.553997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.060 [2024-07-12 17:32:27.554004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.060 [2024-07-12 17:32:27.554027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.060 [2024-07-12 17:32:27.554046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.060 [2024-07-12 17:32:27.554065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.060 [2024-07-12 17:32:27.554085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.060 [2024-07-12 17:32:27.554104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.060 [2024-07-12 17:32:27.554123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.060 [2024-07-12 17:32:27.554142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.060 [2024-07-12 17:32:27.554161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.060 [2024-07-12 17:32:27.554180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.060 [2024-07-12 17:32:27.554199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.554217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.060 [2024-07-12 17:32:27.554236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.554257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.554804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.554824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.554842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.554861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.554880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.554900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.554919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.554937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.554956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.554975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.554987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.554994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:24.060 [2024-07-12 17:32:27.555560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.060 [2024-07-12 17:32:27.555566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:24.060 [2024-07-12 17:32:27.555579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:24.060 [2024-07-12 17:32:27.555586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: WRITE and READ commands on qid:1 (cid 0-126, lba 37296-38312, sqhd 005b-004c), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:24:24.063 [2024-07-12 17:32:27.559044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:24.063 [2024-07-12 17:32:27.559662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.063 [2024-07-12 17:32:27.559669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.559681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.559687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.559700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.559706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.559718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.559725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.559737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.559746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.559774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.559781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.559794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.559800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.559813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.559820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.559832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.559839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.559852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.559859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.559871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.559878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.559891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.559899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.064 [2024-07-12 17:32:27.560770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.064 [2024-07-12 17:32:27.560789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.064 [2024-07-12 17:32:27.560808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.064 [2024-07-12 17:32:27.560827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.064 [2024-07-12 17:32:27.560846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.064 [2024-07-12 17:32:27.560865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.064 [2024-07-12 17:32:27.560884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.560974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.560981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.564442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.564452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.564466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.564472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.564485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.564494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.564506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.564513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.564525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.564532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.564545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.564552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.564907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.564920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.564935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.064 [2024-07-12 17:32:27.564942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.064 [2024-07-12 17:32:27.564955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.564965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.564978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.564985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.564997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.065 [2024-07-12 17:32:27.565394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:24.065 [2024-07-12 17:32:27.565948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.065 [2024-07-12 17:32:27.565955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.565967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.565974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.565986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.565993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566184] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.566601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.566607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.567321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.567335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.567349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.567356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.567369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.567382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.567396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.567403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.567415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.567421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.567434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.567440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.567453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.567460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.567472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.567479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:24.066 [2024-07-12 17:32:27.567491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.066 [2024-07-12 17:32:27.567498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.567766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.567785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.567805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.567824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.567844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.567863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.567881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.567990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.567996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.568015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.568034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.568057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.568076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.568434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.568458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.568477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.568497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.568515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.568536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.568558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.568578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.067 [2024-07-12 17:32:27.568600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.568618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.568641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.568661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.568679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.568698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.568717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.568736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.568755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.568774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.568793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.568813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.568831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.067 [2024-07-12 17:32:27.568851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:24.067 [2024-07-12 17:32:27.568863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.068 [2024-07-12 17:32:27.568871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.568883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.068 [2024-07-12 17:32:27.568890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.568902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.568909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.568921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.068 [2024-07-12 17:32:27.568928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.568940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.568947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.568960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.568966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.568978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.568985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.568997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.569894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.569900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570505] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:24.068 [2024-07-12 17:32:27.570575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.068 [2024-07-12 17:32:27.570582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.570966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.570973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.069 [2024-07-12 17:32:27.571713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.069 [2024-07-12 17:32:27.571733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.069 [2024-07-12 17:32:27.571753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.069 [2024-07-12 17:32:27.571773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.069 [2024-07-12 17:32:27.571793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.069 [2024-07-12 17:32:27.571812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.069 [2024-07-12 17:32:27.571830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.571986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:24.069 [2024-07-12 17:32:27.571999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.069 [2024-07-12 17:32:27.572006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:24.070 [2024-07-12 17:32:27.572018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.070 [2024-07-12 17:32:27.572025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:24.070 [2024-07-12 17:32:27.572037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.070 [2024-07-12 17:32:27.572043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:24.070 [2024-07-12 17:32:27.572057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.070 [2024-07-12 17:32:27.572064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:24.070 [2024-07-12 17:32:27.572077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.070 [2024-07-12 17:32:27.572084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.070 [2024-07-12 17:32:27.572097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.070 [2024-07-12 17:32:27.572104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:24.070 [2024-07-12 17:32:27.572116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.070 [2024-07-12 17:32:27.572125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:24.070 [2024-07-12 17:32:27.572138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.070 [2024-07-12 17:32:27.572144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.572498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.572518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.572537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.572843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.071 [2024-07-12 17:32:27.572862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.572881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.572900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.572919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.572933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.572939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.573309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.573329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.573348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.573367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.573392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.573411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.573429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.573448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.573467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.573486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.573505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.573526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.071 [2024-07-12 17:32:27.573545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:24.071 [2024-07-12 17:32:27.573557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.573981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.573988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.574671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.574678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.575098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.575110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.575124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.575131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.575143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.072 [2024-07-12 17:32:27.575150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:24.072 [2024-07-12 17:32:27.575162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.073 [2024-07-12 17:32:27.575169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.073 [2024-07-12 17:32:27.575181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.073 [2024-07-12 17:32:27.575187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.073 [2024-07-12 17:32:27.575199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.073 [2024-07-12 17:32:27.575206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:24.073 [2024-07-12 17:32:27.575218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.073 [2024-07-12 17:32:27.575225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:24.073 [2024-07-12 17:32:27.575238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.073 [2024-07-12 17:32:27.575244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:24.073 [2024-07-12 17:32:27.575256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.073 [2024-07-12 17:32:27.575265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:24.073 [2024-07-12 17:32:27.575277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.073 [2024-07-12 17:32:27.575284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:24.073 [2024-07-12 17:32:27.575296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.073 [2024-07-12 17:32:27.575303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:24.073 [2024-07-12 17:32:27.575315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.073 [2024-07-12 17:32:27.575322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:24.073 [2024-07-12 17:32:27.575334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.073 [2024-07-12 17:32:27.575341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:24.073 [2024-07-12 17:32:27.575353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.073 [2024-07-12 17:32:27.575360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:24.073 [2024-07-12 17:32:27.575372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:24.073 [2024-07-12 17:32:27.575384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:24.073 [2024-07-12 17:32:27.575435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.073 [2024-07-12 17:32:27.575442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0
[... further near-identical READ/WRITE command/completion NOTICE pairs elided: all sqid:1/qid:1, lba range 37296-38312, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:24:24.075 [2024-07-12 17:32:27.578830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.578837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.578849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.578856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.578870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.578877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.578889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.578896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.578908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.578915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.578927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.578934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.578946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.578953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.578965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.578971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.578984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.578990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.579003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.579009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.579022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.579028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.579041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.579047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.579060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.579066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.579078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.579085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.579097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.579107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.579119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.075 [2024-07-12 17:32:27.579126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.579138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.075 [2024-07-12 17:32:27.579145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.579157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.075 [2024-07-12 17:32:27.579164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.579176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.075 [2024-07-12 17:32:27.579183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:24.075 [2024-07-12 17:32:27.579195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.579202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.579221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.579240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.579258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.579947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.579966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.579985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.579998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.580250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.076 [2024-07-12 17:32:27.580269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.580289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.580308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.580699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.580719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.580740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.580759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.580778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.580797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.580816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.076 [2024-07-12 17:32:27.580828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.076 [2024-07-12 17:32:27.580835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:24.077 [2024-07-12 17:32:27.580847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.077 [2024-07-12 17:32:27.580854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:24.077 [2024-07-12 17:32:27.580866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.077 [2024-07-12 17:32:27.580873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:24.077 [2024-07-12 17:32:27.580885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.077 [2024-07-12 17:32:27.580891] nvme_qpair.c: 
00:24:24.077 [2024-07-12 17:32:27.580904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:24.077 [2024-07-12 17:32:27.580910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:24.078 [2024-07-12 17:32:27.582888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.078 [2024-07-12 17:32:27.582895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:24.079 (same command/completion pair repeated for the remaining outstanding I/O on qid:1 between 17:32:27.580904 and 17:32:27.584059: WRITE lba:37584-38312 and READ lba:37296-37472, each len:8, all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0)
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.584982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.584989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.585002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.585009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:24.079 [2024-07-12 17:32:27.585022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.079 [2024-07-12 17:32:27.585029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.585979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.585986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.586002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.586010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.080 [2024-07-12 17:32:27.586026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.080 [2024-07-12 17:32:27.586035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... identical NOTICE pairs repeat: nvme_qpair.c: 243:nvme_io_qpair_print_command logs each queued WRITE (lba:38064-38312, 82688-83368, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (lba:37296-37480, 82440-82760, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on sqid:1, and nvme_qpair.c: 474:spdk_nvme_print_completion reports each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0, from 2024-07-12 17:32:27.586051 through 17:32:40.427968 ...]
00:24:24.083 [2024-07-12 17:32:40.428533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.083 [2024-07-12 17:32:40.428549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:24.083 [2024-07-12 17:32:40.428566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.083 [2024-07-12 17:32:40.428573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:24.083 [2024-07-12 17:32:40.428586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.083 [2024-07-12 17:32:40.428593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:24.083 [2024-07-12 17:32:40.428605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.083 [2024-07-12 17:32:40.428612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:24.083 [2024-07-12 17:32:40.428625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.083 [2024-07-12 17:32:40.428631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:24.083 [2024-07-12 17:32:40.428644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.083 [2024-07-12 17:32:40.428650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:24.083 [2024-07-12 17:32:40.428663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.083 [2024-07-12 17:32:40.428670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:24.083 [2024-07-12 17:32:40.428683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.083 [2024-07-12 17:32:40.428695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:24.083 [2024-07-12 17:32:40.428708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.083 [2024-07-12 17:32:40.428715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:24.083 Received shutdown signal, test time was about 27.133073 seconds 00:24:24.083 00:24:24.083 Latency(us) 00:24:24.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.083 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:24.083 Verification LBA range: start 0x0 length 0x4000 00:24:24.083 Nvme0n1 : 27.13 10330.01 40.35 0.00 0.00 12370.88 309.87 3078254.41 00:24:24.084 =================================================================================================================== 00:24:24.084 Total : 10330.01 40.35 0.00 0.00 12370.88 309.87 3078254.41 00:24:24.084 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:24.343 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:24.343 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:24.343 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:24.343 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:24.343 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:24.343 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:24.343 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:24.343 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:24.343 17:32:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:24.343 rmmod nvme_tcp 00:24:24.343 rmmod nvme_fabrics 00:24:24.343 rmmod nvme_keyring 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 4175059 ']' 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 4175059 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 4175059 ']' 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 4175059 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:24.343 17:32:43 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4175059 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4175059' 00:24:24.343 killing process with pid 4175059 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 4175059 00:24:24.343 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 4175059 00:24:24.602 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:24.602 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:24.602 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:24.602 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:24.602 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:24.602 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.602 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.602 17:32:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.136 17:32:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:27.136 00:24:27.136 real 0m39.053s 00:24:27.136 user 1m46.493s 00:24:27.136 sys 0m10.215s 00:24:27.136 17:32:45 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:27.136 17:32:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:27.136 ************************************ 00:24:27.136 END TEST nvmf_host_multipath_status 00:24:27.136 ************************************ 00:24:27.136 17:32:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:27.136 17:32:45 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:27.136 17:32:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:27.136 17:32:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:27.136 17:32:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:27.136 ************************************ 00:24:27.136 START TEST nvmf_discovery_remove_ifc 00:24:27.136 ************************************ 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:27.136 * Looking for test storage... 
00:24:27.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.136 17:32:45 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.136 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.137 17:32:45 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:27.137 17:32:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:32.406 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:32.406 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:32.406 Found net devices under 0000:86:00.0: cvl_0_0 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:32.406 Found net devices under 0000:86:00.1: cvl_0_1 
00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.406 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.407 
17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:32.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:24:32.407 00:24:32.407 --- 10.0.0.2 ping statistics --- 00:24:32.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.407 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:24:32.407 00:24:32.407 --- 10.0.0.1 ping statistics --- 00:24:32.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.407 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=4183798 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 4183798 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 4183798 ']' 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:32.407 17:32:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.407 [2024-07-12 17:32:50.732443] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:24:32.407 [2024-07-12 17:32:50.732484] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.407 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.407 [2024-07-12 17:32:50.789562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.407 [2024-07-12 17:32:50.867956] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.407 [2024-07-12 17:32:50.867990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:32.407 [2024-07-12 17:32:50.867998] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.407 [2024-07-12 17:32:50.868006] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.407 [2024-07-12 17:32:50.868011] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.407 [2024-07-12 17:32:50.868029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.006 [2024-07-12 17:32:51.571154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.006 [2024-07-12 17:32:51.579267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:33.006 null0 00:24:33.006 [2024-07-12 17:32:51.611291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4183920 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4183920 /tmp/host.sock 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 4183920 ']' 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:33.006 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.006 17:32:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.006 [2024-07-12 17:32:51.664635] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:24:33.006 [2024-07-12 17:32:51.664674] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4183920 ] 00:24:33.006 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.006 [2024-07-12 17:32:51.714724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.274 [2024-07-12 17:32:51.794026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.842 17:32:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.219 [2024-07-12 17:32:53.620447] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:35.219 [2024-07-12 17:32:53.620467] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:35.219 [2024-07-12 17:32:53.620479] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:35.219 [2024-07-12 17:32:53.706740] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:35.219 [2024-07-12 17:32:53.770477] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:35.219 [2024-07-12 17:32:53.770522] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:35.219 [2024-07-12 17:32:53.770541] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:35.219 [2024-07-12 17:32:53.770554] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:35.219 [2024-07-12 17:32:53.770572] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:35.219 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.219 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:35.219 17:32:53 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.220 [2024-07-12 17:32:53.778151] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6cce30 was disconnected and freed. delete nvme_qpair. 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:35.220 17:32:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:36.599 17:32:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:36.599 17:32:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:36.599 17:32:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:36.599 17:32:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.599 17:32:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:36.599 17:32:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.599 17:32:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:36.599 17:32:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.599 17:32:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:36.599 17:32:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:37.536 17:32:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:37.536 17:32:56 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:37.536 17:32:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:37.536 17:32:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.536 17:32:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:37.536 17:32:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.536 17:32:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:37.536 17:32:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.536 17:32:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:37.536 17:32:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:38.473 17:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:38.473 17:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:38.473 17:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:38.473 17:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.473 17:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:38.473 17:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.473 17:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:38.473 17:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.473 17:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:38.473 17:32:57 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:39.425 17:32:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.425 17:32:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.425 17:32:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.425 17:32:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.425 17:32:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.425 17:32:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.425 17:32:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.425 17:32:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.425 17:32:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:39.425 17:32:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:40.803 17:32:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.803 17:32:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.803 17:32:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.803 17:32:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.803 17:32:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:40.803 17:32:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.803 17:32:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.803 17:32:59 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.803 [2024-07-12 17:32:59.212083] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:40.803 [2024-07-12 17:32:59.212123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.803 [2024-07-12 17:32:59.212134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.803 [2024-07-12 17:32:59.212160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.803 [2024-07-12 17:32:59.212167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.803 [2024-07-12 17:32:59.212174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.803 [2024-07-12 17:32:59.212180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.803 [2024-07-12 17:32:59.212187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.803 [2024-07-12 17:32:59.212194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.803 [2024-07-12 17:32:59.212201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.803 [2024-07-12 17:32:59.212208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:40.803 [2024-07-12 17:32:59.212214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x693690 is same with the state(5) to be set 00:24:40.803 [2024-07-12 17:32:59.222106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x693690 (9): Bad file descriptor 00:24:40.803 [2024-07-12 17:32:59.232144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:40.803 17:32:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:40.803 17:32:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:41.742 17:33:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.742 17:33:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.742 17:33:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.742 17:33:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.742 17:33:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.742 17:33:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.742 17:33:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.742 [2024-07-12 17:33:00.266407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:41.742 [2024-07-12 17:33:00.266465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x693690 with addr=10.0.0.2, port=4420 00:24:41.742 [2024-07-12 17:33:00.266497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x693690 is same with the state(5) to be set 00:24:41.742 [2024-07-12 17:33:00.266533] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x693690 (9): Bad file descriptor 00:24:41.742 [2024-07-12 17:33:00.266967] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:41.742 [2024-07-12 17:33:00.266988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:41.742 [2024-07-12 17:33:00.266997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:41.742 [2024-07-12 17:33:00.267008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:41.742 [2024-07-12 17:33:00.267031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.742 [2024-07-12 17:33:00.267043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:41.742 17:33:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.742 17:33:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:41.742 17:33:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:42.679 [2024-07-12 17:33:01.269531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:42.679 [2024-07-12 17:33:01.269556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:42.679 [2024-07-12 17:33:01.269564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:42.679 [2024-07-12 17:33:01.269572] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:42.679 [2024-07-12 17:33:01.269603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.680 [2024-07-12 17:33:01.269622] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:42.680 [2024-07-12 17:33:01.269646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.680 [2024-07-12 17:33:01.269657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.680 [2024-07-12 17:33:01.269668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.680 [2024-07-12 17:33:01.269674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.680 [2024-07-12 17:33:01.269681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.680 [2024-07-12 17:33:01.269688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.680 [2024-07-12 17:33:01.269699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.680 
[2024-07-12 17:33:01.269705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.680 [2024-07-12 17:33:01.269713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.680 [2024-07-12 17:33:01.269719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.680 [2024-07-12 17:33:01.269726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:24:42.680 [2024-07-12 17:33:01.269780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x692a80 (9): Bad file descriptor 00:24:42.680 [2024-07-12 17:33:01.270800] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:42.680 [2024-07-12 17:33:01.270809] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.680 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.939 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:42.939 17:33:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.876 17:33:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.876 17:33:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.876 17:33:02 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.876 17:33:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.876 17:33:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.876 17:33:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.876 17:33:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.876 17:33:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.876 17:33:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:43.876 17:33:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.812 [2024-07-12 17:33:03.284053] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:44.812 [2024-07-12 17:33:03.284073] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:44.812 [2024-07-12 17:33:03.284085] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:44.812 [2024-07-12 17:33:03.413507] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:44.812 [2024-07-12 17:33:03.475753] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:44.812 [2024-07-12 17:33:03.475788] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:44.812 [2024-07-12 17:33:03.475806] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:44.812 [2024-07-12 17:33:03.475818] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:44.812 [2024-07-12 17:33:03.475824] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:44.812 [2024-07-12 17:33:03.483357] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6a98d0 was disconnected and freed. delete nvme_qpair. 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4183920 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 4183920 ']' 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 4183920 00:24:44.812 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:45.071 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.071 17:33:03 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4183920 00:24:45.071 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:45.071 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:45.071 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4183920' 00:24:45.071 killing process with pid 4183920 00:24:45.071 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 4183920 00:24:45.071 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 4183920 00:24:45.071 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:45.071 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:45.071 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:45.071 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:45.071 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:45.071 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:45.071 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:45.071 rmmod nvme_tcp 00:24:45.071 rmmod nvme_fabrics 00:24:45.071 rmmod nvme_keyring 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 4183798 ']' 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@490 -- # killprocess 4183798 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 4183798 ']' 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 4183798 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4183798 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4183798' 00:24:45.330 killing process with pid 4183798 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 4183798 00:24:45.330 17:33:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 4183798 00:24:45.330 17:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:45.330 17:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:45.330 17:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:45.330 17:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.330 17:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:45.330 17:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.330 17:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 14> /dev/null' 00:24:45.330 17:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.867 17:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:47.867 00:24:47.867 real 0m20.733s 00:24:47.867 user 0m26.627s 00:24:47.867 sys 0m4.968s 00:24:47.867 17:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:47.867 17:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.867 ************************************ 00:24:47.867 END TEST nvmf_discovery_remove_ifc 00:24:47.867 ************************************ 00:24:47.867 17:33:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:47.867 17:33:06 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:47.867 17:33:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:47.867 17:33:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:47.867 17:33:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:47.867 ************************************ 00:24:47.867 START TEST nvmf_identify_kernel_target 00:24:47.867 ************************************ 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:47.867 * Looking for test storage... 
00:24:47.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.867 17:33:06 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.867 17:33:06 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:47.867 17:33:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:53.141 17:33:11 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:53.141 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.141 17:33:11 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:53.141 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:53.141 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:53.142 Found net devices under 0000:86:00.0: cvl_0_0 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:53.142 Found net devices under 0000:86:00.1: cvl_0_1 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:53.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:24:53.142 00:24:53.142 --- 10.0.0.2 ping statistics --- 00:24:53.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.142 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:53.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:24:53.142 00:24:53.142 --- 10.0.0.1 ping statistics --- 00:24:53.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.142 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.142 17:33:11 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:53.142 17:33:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:55.676 Waiting for block devices as requested 00:24:55.676 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:55.676 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:55.676 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:55.935 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:55.935 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:55.935 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:55.935 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:56.195 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:56.195 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:56.195 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:56.195 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:56.454 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:56.454 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:56.454 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:56.713 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:56.713 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:56.713 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:56.713 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:56.713 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:56.713 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:56.713 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:56.713 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:56.713 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:56.713 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:56.713 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:56.713 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:56.972 No valid GPT data, bailing 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:56.972 00:24:56.972 Discovery Log Number of Records 2, Generation counter 2 00:24:56.972 =====Discovery Log Entry 0====== 00:24:56.972 trtype: tcp 00:24:56.972 adrfam: ipv4 00:24:56.972 subtype: current discovery subsystem 00:24:56.972 treq: not specified, sq flow control disable supported 00:24:56.972 portid: 1 00:24:56.972 trsvcid: 4420 00:24:56.972 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:56.972 traddr: 10.0.0.1 00:24:56.972 eflags: none 00:24:56.972 sectype: none 00:24:56.972 =====Discovery Log Entry 1====== 00:24:56.972 trtype: tcp 00:24:56.972 adrfam: ipv4 00:24:56.972 subtype: nvme subsystem 00:24:56.972 treq: not specified, sq flow control disable supported 00:24:56.972 portid: 1 00:24:56.972 trsvcid: 4420 00:24:56.972 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:56.972 traddr: 10.0.0.1 00:24:56.972 eflags: none 00:24:56.972 sectype: none 00:24:56.972 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:56.972 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:56.972 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.972 ===================================================== 00:24:56.972 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:56.972 ===================================================== 00:24:56.972 Controller Capabilities/Features 00:24:56.972 ================================ 00:24:56.972 Vendor ID: 0000 00:24:56.972 Subsystem Vendor ID: 0000 00:24:56.972 Serial Number: d051fd8cc0d38707a69d 00:24:56.972 Model Number: Linux 00:24:56.972 Firmware Version: 6.7.0-68 00:24:56.972 Recommended Arb Burst: 0 00:24:56.972 IEEE OUI Identifier: 00 00 00 00:24:56.973 Multi-path I/O 00:24:56.973 May have multiple subsystem ports: No 00:24:56.973 May have multiple controllers: No 00:24:56.973 Associated with SR-IOV VF: No 00:24:56.973 Max Data Transfer Size: Unlimited 00:24:56.973 Max Number of Namespaces: 0 00:24:56.973 Max Number of I/O Queues: 1024 00:24:56.973 NVMe Specification Version (VS): 1.3 00:24:56.973 NVMe Specification Version (Identify): 1.3 00:24:56.973 Maximum Queue Entries: 1024 00:24:56.973 Contiguous Queues Required: No 00:24:56.973 Arbitration Mechanisms Supported 00:24:56.973 Weighted Round Robin: Not Supported 00:24:56.973 Vendor Specific: Not Supported 00:24:56.973 Reset Timeout: 7500 ms 00:24:56.973 Doorbell Stride: 4 bytes 00:24:56.973 NVM Subsystem Reset: Not Supported 00:24:56.973 Command Sets Supported 00:24:56.973 NVM Command Set: Supported 00:24:56.973 Boot Partition: Not Supported 00:24:56.973 Memory Page Size Minimum: 4096 bytes 00:24:56.973 Memory Page Size Maximum: 4096 bytes 00:24:56.973 Persistent Memory Region: Not Supported 00:24:56.973 Optional Asynchronous Events Supported 00:24:56.973 Namespace Attribute Notices: Not Supported 00:24:56.973 Firmware Activation Notices: Not Supported 00:24:56.973 ANA Change Notices: Not Supported 00:24:56.973 PLE Aggregate Log Change Notices: Not Supported 
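Editorial note: the configfs sequence traced at nvmf/common.sh@658-677 above builds the kernel NVMe-oF target by hand. A dry-run sketch of those steps follows; the attribute file names are my reading of the standard nvmet configfs layout (the trace only shows the values being echoed), and the scratch-directory default is an assumption so the sequence can be exercised without root or the nvmet module. On a real host NVMET_ROOT would be /sys/kernel/config/nvmet after `modprobe nvmet nvmet-tcp`.

```shell
# Dry-run sketch of configure_kernel_target as traced in this log.
# Assumption: NVMET_ROOT defaults to a scratch dir; on a real host use
# /sys/kernel/config/nvmet (requires root and the nvmet/nvmet-tcp modules).
NVMET_ROOT="${NVMET_ROOT:-$(mktemp -d)}"
NQN="nqn.2016-06.io.spdk:testnqn"
DEV="/dev/nvme0n1"      # backing device the test picked (first unused, non-zoned NVMe block dev)
subsys="$NVMET_ROOT/subsystems/$NQN"
port="$NVMET_ROOT/ports/1"

# Create subsystem, namespace 1, and port 1 (configfs mkdirs in the trace).
# On real configfs ports/1/subsystems is created automatically; mkdir -p is a no-op there.
mkdir -p "$subsys/namespaces/1" "$port/subsystems"

echo "SPDK-$NQN" > "$subsys/attr_model"               # model string seen later in identify output
echo 1           > "$subsys/attr_allow_any_host"      # assumed target of the first `echo 1`
echo "$DEV"      > "$subsys/namespaces/1/device_path" # back namespace 1 with the block device
echo 1           > "$subsys/namespaces/1/enable"      # assumed target of the second `echo 1`
echo 10.0.0.1    > "$port/addr_traddr"                # listen address
echo tcp         > "$port/addr_trtype"                # transport
echo 4420        > "$port/addr_trsvcid"               # service id (port)
echo ipv4        > "$port/addr_adrfam"                # address family

# Expose the subsystem on the port (the `ln -s` in the trace).
ln -sf "$subsys" "$port/subsystems/"
echo "kernel target $NQN configured under $NVMET_ROOT"
```

After this, the `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` in the trace returns the two discovery-log records shown above (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn).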
00:24:56.973 LBA Status Info Alert Notices: Not Supported 00:24:56.973 EGE Aggregate Log Change Notices: Not Supported 00:24:56.973 Normal NVM Subsystem Shutdown event: Not Supported 00:24:56.973 Zone Descriptor Change Notices: Not Supported 00:24:56.973 Discovery Log Change Notices: Supported 00:24:56.973 Controller Attributes 00:24:56.973 128-bit Host Identifier: Not Supported 00:24:56.973 Non-Operational Permissive Mode: Not Supported 00:24:56.973 NVM Sets: Not Supported 00:24:56.973 Read Recovery Levels: Not Supported 00:24:56.973 Endurance Groups: Not Supported 00:24:56.973 Predictable Latency Mode: Not Supported 00:24:56.973 Traffic Based Keep ALive: Not Supported 00:24:56.973 Namespace Granularity: Not Supported 00:24:56.973 SQ Associations: Not Supported 00:24:56.973 UUID List: Not Supported 00:24:56.973 Multi-Domain Subsystem: Not Supported 00:24:56.973 Fixed Capacity Management: Not Supported 00:24:56.973 Variable Capacity Management: Not Supported 00:24:56.973 Delete Endurance Group: Not Supported 00:24:56.973 Delete NVM Set: Not Supported 00:24:56.973 Extended LBA Formats Supported: Not Supported 00:24:56.973 Flexible Data Placement Supported: Not Supported 00:24:56.973 00:24:56.973 Controller Memory Buffer Support 00:24:56.973 ================================ 00:24:56.973 Supported: No 00:24:56.973 00:24:56.973 Persistent Memory Region Support 00:24:56.973 ================================ 00:24:56.973 Supported: No 00:24:56.973 00:24:56.973 Admin Command Set Attributes 00:24:56.973 ============================ 00:24:56.973 Security Send/Receive: Not Supported 00:24:56.973 Format NVM: Not Supported 00:24:56.973 Firmware Activate/Download: Not Supported 00:24:56.973 Namespace Management: Not Supported 00:24:56.973 Device Self-Test: Not Supported 00:24:56.973 Directives: Not Supported 00:24:56.973 NVMe-MI: Not Supported 00:24:56.973 Virtualization Management: Not Supported 00:24:56.973 Doorbell Buffer Config: Not Supported 00:24:56.973 Get LBA Status 
Capability: Not Supported 00:24:56.973 Command & Feature Lockdown Capability: Not Supported 00:24:56.973 Abort Command Limit: 1 00:24:56.973 Async Event Request Limit: 1 00:24:56.973 Number of Firmware Slots: N/A 00:24:56.973 Firmware Slot 1 Read-Only: N/A 00:24:56.973 Firmware Activation Without Reset: N/A 00:24:56.973 Multiple Update Detection Support: N/A 00:24:56.973 Firmware Update Granularity: No Information Provided 00:24:56.973 Per-Namespace SMART Log: No 00:24:56.973 Asymmetric Namespace Access Log Page: Not Supported 00:24:56.973 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:56.973 Command Effects Log Page: Not Supported 00:24:56.973 Get Log Page Extended Data: Supported 00:24:56.973 Telemetry Log Pages: Not Supported 00:24:56.973 Persistent Event Log Pages: Not Supported 00:24:56.973 Supported Log Pages Log Page: May Support 00:24:56.973 Commands Supported & Effects Log Page: Not Supported 00:24:56.973 Feature Identifiers & Effects Log Page:May Support 00:24:56.973 NVMe-MI Commands & Effects Log Page: May Support 00:24:56.973 Data Area 4 for Telemetry Log: Not Supported 00:24:56.973 Error Log Page Entries Supported: 1 00:24:56.973 Keep Alive: Not Supported 00:24:56.973 00:24:56.973 NVM Command Set Attributes 00:24:56.973 ========================== 00:24:56.973 Submission Queue Entry Size 00:24:56.973 Max: 1 00:24:56.973 Min: 1 00:24:56.973 Completion Queue Entry Size 00:24:56.973 Max: 1 00:24:56.973 Min: 1 00:24:56.973 Number of Namespaces: 0 00:24:56.973 Compare Command: Not Supported 00:24:56.973 Write Uncorrectable Command: Not Supported 00:24:56.973 Dataset Management Command: Not Supported 00:24:56.973 Write Zeroes Command: Not Supported 00:24:56.973 Set Features Save Field: Not Supported 00:24:56.973 Reservations: Not Supported 00:24:56.973 Timestamp: Not Supported 00:24:56.973 Copy: Not Supported 00:24:56.973 Volatile Write Cache: Not Present 00:24:56.973 Atomic Write Unit (Normal): 1 00:24:56.973 Atomic Write Unit (PFail): 1 
00:24:56.973 Atomic Compare & Write Unit: 1 00:24:56.973 Fused Compare & Write: Not Supported 00:24:56.973 Scatter-Gather List 00:24:56.973 SGL Command Set: Supported 00:24:56.973 SGL Keyed: Not Supported 00:24:56.973 SGL Bit Bucket Descriptor: Not Supported 00:24:56.973 SGL Metadata Pointer: Not Supported 00:24:56.973 Oversized SGL: Not Supported 00:24:56.973 SGL Metadata Address: Not Supported 00:24:56.973 SGL Offset: Supported 00:24:56.973 Transport SGL Data Block: Not Supported 00:24:56.973 Replay Protected Memory Block: Not Supported 00:24:56.973 00:24:56.973 Firmware Slot Information 00:24:56.973 ========================= 00:24:56.973 Active slot: 0 00:24:56.973 00:24:56.973 00:24:56.973 Error Log 00:24:56.973 ========= 00:24:56.973 00:24:56.973 Active Namespaces 00:24:56.973 ================= 00:24:56.973 Discovery Log Page 00:24:56.973 ================== 00:24:56.973 Generation Counter: 2 00:24:56.973 Number of Records: 2 00:24:56.973 Record Format: 0 00:24:56.973 00:24:56.973 Discovery Log Entry 0 00:24:56.973 ---------------------- 00:24:56.973 Transport Type: 3 (TCP) 00:24:56.973 Address Family: 1 (IPv4) 00:24:56.973 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:56.973 Entry Flags: 00:24:56.973 Duplicate Returned Information: 0 00:24:56.973 Explicit Persistent Connection Support for Discovery: 0 00:24:56.973 Transport Requirements: 00:24:56.973 Secure Channel: Not Specified 00:24:56.973 Port ID: 1 (0x0001) 00:24:56.973 Controller ID: 65535 (0xffff) 00:24:56.973 Admin Max SQ Size: 32 00:24:56.973 Transport Service Identifier: 4420 00:24:56.973 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:56.973 Transport Address: 10.0.0.1 00:24:56.973 Discovery Log Entry 1 00:24:56.973 ---------------------- 00:24:56.973 Transport Type: 3 (TCP) 00:24:56.973 Address Family: 1 (IPv4) 00:24:56.973 Subsystem Type: 2 (NVM Subsystem) 00:24:56.973 Entry Flags: 00:24:56.973 Duplicate Returned Information: 0 00:24:56.973 Explicit Persistent 
Connection Support for Discovery: 0 00:24:56.973 Transport Requirements: 00:24:56.973 Secure Channel: Not Specified 00:24:56.973 Port ID: 1 (0x0001) 00:24:56.973 Controller ID: 65535 (0xffff) 00:24:56.973 Admin Max SQ Size: 32 00:24:56.973 Transport Service Identifier: 4420 00:24:56.973 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:56.973 Transport Address: 10.0.0.1 00:24:56.973 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:57.233 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.233 get_feature(0x01) failed 00:24:57.233 get_feature(0x02) failed 00:24:57.233 get_feature(0x04) failed 00:24:57.233 ===================================================== 00:24:57.233 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:57.233 ===================================================== 00:24:57.233 Controller Capabilities/Features 00:24:57.233 ================================ 00:24:57.233 Vendor ID: 0000 00:24:57.233 Subsystem Vendor ID: 0000 00:24:57.233 Serial Number: b432318b8451ae8c552e 00:24:57.233 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:57.233 Firmware Version: 6.7.0-68 00:24:57.233 Recommended Arb Burst: 6 00:24:57.233 IEEE OUI Identifier: 00 00 00 00:24:57.233 Multi-path I/O 00:24:57.233 May have multiple subsystem ports: Yes 00:24:57.233 May have multiple controllers: Yes 00:24:57.233 Associated with SR-IOV VF: No 00:24:57.233 Max Data Transfer Size: Unlimited 00:24:57.233 Max Number of Namespaces: 1024 00:24:57.233 Max Number of I/O Queues: 128 00:24:57.233 NVMe Specification Version (VS): 1.3 00:24:57.233 NVMe Specification Version (Identify): 1.3 00:24:57.233 Maximum Queue Entries: 1024 00:24:57.233 Contiguous Queues Required: No 00:24:57.233 Arbitration Mechanisms Supported 
00:24:57.233 Weighted Round Robin: Not Supported 00:24:57.233 Vendor Specific: Not Supported 00:24:57.233 Reset Timeout: 7500 ms 00:24:57.233 Doorbell Stride: 4 bytes 00:24:57.233 NVM Subsystem Reset: Not Supported 00:24:57.233 Command Sets Supported 00:24:57.233 NVM Command Set: Supported 00:24:57.233 Boot Partition: Not Supported 00:24:57.233 Memory Page Size Minimum: 4096 bytes 00:24:57.233 Memory Page Size Maximum: 4096 bytes 00:24:57.233 Persistent Memory Region: Not Supported 00:24:57.233 Optional Asynchronous Events Supported 00:24:57.233 Namespace Attribute Notices: Supported 00:24:57.233 Firmware Activation Notices: Not Supported 00:24:57.233 ANA Change Notices: Supported 00:24:57.233 PLE Aggregate Log Change Notices: Not Supported 00:24:57.233 LBA Status Info Alert Notices: Not Supported 00:24:57.233 EGE Aggregate Log Change Notices: Not Supported 00:24:57.233 Normal NVM Subsystem Shutdown event: Not Supported 00:24:57.233 Zone Descriptor Change Notices: Not Supported 00:24:57.233 Discovery Log Change Notices: Not Supported 00:24:57.233 Controller Attributes 00:24:57.233 128-bit Host Identifier: Supported 00:24:57.233 Non-Operational Permissive Mode: Not Supported 00:24:57.233 NVM Sets: Not Supported 00:24:57.233 Read Recovery Levels: Not Supported 00:24:57.233 Endurance Groups: Not Supported 00:24:57.233 Predictable Latency Mode: Not Supported 00:24:57.233 Traffic Based Keep ALive: Supported 00:24:57.233 Namespace Granularity: Not Supported 00:24:57.233 SQ Associations: Not Supported 00:24:57.233 UUID List: Not Supported 00:24:57.233 Multi-Domain Subsystem: Not Supported 00:24:57.233 Fixed Capacity Management: Not Supported 00:24:57.233 Variable Capacity Management: Not Supported 00:24:57.233 Delete Endurance Group: Not Supported 00:24:57.233 Delete NVM Set: Not Supported 00:24:57.233 Extended LBA Formats Supported: Not Supported 00:24:57.233 Flexible Data Placement Supported: Not Supported 00:24:57.233 00:24:57.233 Controller Memory Buffer Support 
00:24:57.233 ================================ 00:24:57.233 Supported: No 00:24:57.233 00:24:57.233 Persistent Memory Region Support 00:24:57.233 ================================ 00:24:57.233 Supported: No 00:24:57.233 00:24:57.233 Admin Command Set Attributes 00:24:57.233 ============================ 00:24:57.233 Security Send/Receive: Not Supported 00:24:57.233 Format NVM: Not Supported 00:24:57.233 Firmware Activate/Download: Not Supported 00:24:57.233 Namespace Management: Not Supported 00:24:57.233 Device Self-Test: Not Supported 00:24:57.234 Directives: Not Supported 00:24:57.234 NVMe-MI: Not Supported 00:24:57.234 Virtualization Management: Not Supported 00:24:57.234 Doorbell Buffer Config: Not Supported 00:24:57.234 Get LBA Status Capability: Not Supported 00:24:57.234 Command & Feature Lockdown Capability: Not Supported 00:24:57.234 Abort Command Limit: 4 00:24:57.234 Async Event Request Limit: 4 00:24:57.234 Number of Firmware Slots: N/A 00:24:57.234 Firmware Slot 1 Read-Only: N/A 00:24:57.234 Firmware Activation Without Reset: N/A 00:24:57.234 Multiple Update Detection Support: N/A 00:24:57.234 Firmware Update Granularity: No Information Provided 00:24:57.234 Per-Namespace SMART Log: Yes 00:24:57.234 Asymmetric Namespace Access Log Page: Supported 00:24:57.234 ANA Transition Time : 10 sec 00:24:57.234 00:24:57.234 Asymmetric Namespace Access Capabilities 00:24:57.234 ANA Optimized State : Supported 00:24:57.234 ANA Non-Optimized State : Supported 00:24:57.234 ANA Inaccessible State : Supported 00:24:57.234 ANA Persistent Loss State : Supported 00:24:57.234 ANA Change State : Supported 00:24:57.234 ANAGRPID is not changed : No 00:24:57.234 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:57.234 00:24:57.234 ANA Group Identifier Maximum : 128 00:24:57.234 Number of ANA Group Identifiers : 128 00:24:57.234 Max Number of Allowed Namespaces : 1024 00:24:57.234 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:57.234 Command Effects Log Page: Supported 
00:24:57.234 Get Log Page Extended Data: Supported 00:24:57.234 Telemetry Log Pages: Not Supported 00:24:57.234 Persistent Event Log Pages: Not Supported 00:24:57.234 Supported Log Pages Log Page: May Support 00:24:57.234 Commands Supported & Effects Log Page: Not Supported 00:24:57.234 Feature Identifiers & Effects Log Page:May Support 00:24:57.234 NVMe-MI Commands & Effects Log Page: May Support 00:24:57.234 Data Area 4 for Telemetry Log: Not Supported 00:24:57.234 Error Log Page Entries Supported: 128 00:24:57.234 Keep Alive: Supported 00:24:57.234 Keep Alive Granularity: 1000 ms 00:24:57.234 00:24:57.234 NVM Command Set Attributes 00:24:57.234 ========================== 00:24:57.234 Submission Queue Entry Size 00:24:57.234 Max: 64 00:24:57.234 Min: 64 00:24:57.234 Completion Queue Entry Size 00:24:57.234 Max: 16 00:24:57.234 Min: 16 00:24:57.234 Number of Namespaces: 1024 00:24:57.234 Compare Command: Not Supported 00:24:57.234 Write Uncorrectable Command: Not Supported 00:24:57.234 Dataset Management Command: Supported 00:24:57.234 Write Zeroes Command: Supported 00:24:57.234 Set Features Save Field: Not Supported 00:24:57.234 Reservations: Not Supported 00:24:57.234 Timestamp: Not Supported 00:24:57.234 Copy: Not Supported 00:24:57.234 Volatile Write Cache: Present 00:24:57.234 Atomic Write Unit (Normal): 1 00:24:57.234 Atomic Write Unit (PFail): 1 00:24:57.234 Atomic Compare & Write Unit: 1 00:24:57.234 Fused Compare & Write: Not Supported 00:24:57.234 Scatter-Gather List 00:24:57.234 SGL Command Set: Supported 00:24:57.234 SGL Keyed: Not Supported 00:24:57.234 SGL Bit Bucket Descriptor: Not Supported 00:24:57.234 SGL Metadata Pointer: Not Supported 00:24:57.234 Oversized SGL: Not Supported 00:24:57.234 SGL Metadata Address: Not Supported 00:24:57.234 SGL Offset: Supported 00:24:57.234 Transport SGL Data Block: Not Supported 00:24:57.234 Replay Protected Memory Block: Not Supported 00:24:57.234 00:24:57.234 Firmware Slot Information 00:24:57.234 
========================= 00:24:57.234 Active slot: 0 00:24:57.234 00:24:57.234 Asymmetric Namespace Access 00:24:57.234 =========================== 00:24:57.234 Change Count : 0 00:24:57.234 Number of ANA Group Descriptors : 1 00:24:57.234 ANA Group Descriptor : 0 00:24:57.234 ANA Group ID : 1 00:24:57.234 Number of NSID Values : 1 00:24:57.234 Change Count : 0 00:24:57.234 ANA State : 1 00:24:57.234 Namespace Identifier : 1 00:24:57.234 00:24:57.234 Commands Supported and Effects 00:24:57.234 ============================== 00:24:57.234 Admin Commands 00:24:57.234 -------------- 00:24:57.234 Get Log Page (02h): Supported 00:24:57.234 Identify (06h): Supported 00:24:57.234 Abort (08h): Supported 00:24:57.234 Set Features (09h): Supported 00:24:57.234 Get Features (0Ah): Supported 00:24:57.234 Asynchronous Event Request (0Ch): Supported 00:24:57.234 Keep Alive (18h): Supported 00:24:57.234 I/O Commands 00:24:57.234 ------------ 00:24:57.234 Flush (00h): Supported 00:24:57.234 Write (01h): Supported LBA-Change 00:24:57.234 Read (02h): Supported 00:24:57.234 Write Zeroes (08h): Supported LBA-Change 00:24:57.234 Dataset Management (09h): Supported 00:24:57.234 00:24:57.234 Error Log 00:24:57.234 ========= 00:24:57.234 Entry: 0 00:24:57.234 Error Count: 0x3 00:24:57.234 Submission Queue Id: 0x0 00:24:57.234 Command Id: 0x5 00:24:57.234 Phase Bit: 0 00:24:57.234 Status Code: 0x2 00:24:57.234 Status Code Type: 0x0 00:24:57.234 Do Not Retry: 1 00:24:57.234 Error Location: 0x28 00:24:57.234 LBA: 0x0 00:24:57.234 Namespace: 0x0 00:24:57.234 Vendor Log Page: 0x0 00:24:57.234 ----------- 00:24:57.234 Entry: 1 00:24:57.234 Error Count: 0x2 00:24:57.234 Submission Queue Id: 0x0 00:24:57.234 Command Id: 0x5 00:24:57.234 Phase Bit: 0 00:24:57.234 Status Code: 0x2 00:24:57.234 Status Code Type: 0x0 00:24:57.234 Do Not Retry: 1 00:24:57.234 Error Location: 0x28 00:24:57.234 LBA: 0x0 00:24:57.234 Namespace: 0x0 00:24:57.234 Vendor Log Page: 0x0 00:24:57.234 ----------- 00:24:57.234 
Entry: 2 00:24:57.234 Error Count: 0x1 00:24:57.234 Submission Queue Id: 0x0 00:24:57.234 Command Id: 0x4 00:24:57.234 Phase Bit: 0 00:24:57.234 Status Code: 0x2 00:24:57.234 Status Code Type: 0x0 00:24:57.234 Do Not Retry: 1 00:24:57.234 Error Location: 0x28 00:24:57.234 LBA: 0x0 00:24:57.234 Namespace: 0x0 00:24:57.234 Vendor Log Page: 0x0 00:24:57.234 00:24:57.234 Number of Queues 00:24:57.234 ================ 00:24:57.234 Number of I/O Submission Queues: 128 00:24:57.234 Number of I/O Completion Queues: 128 00:24:57.234 00:24:57.234 ZNS Specific Controller Data 00:24:57.234 ============================ 00:24:57.234 Zone Append Size Limit: 0 00:24:57.234 00:24:57.234 00:24:57.234 Active Namespaces 00:24:57.234 ================= 00:24:57.234 get_feature(0x05) failed 00:24:57.234 Namespace ID:1 00:24:57.234 Command Set Identifier: NVM (00h) 00:24:57.234 Deallocate: Supported 00:24:57.234 Deallocated/Unwritten Error: Not Supported 00:24:57.234 Deallocated Read Value: Unknown 00:24:57.234 Deallocate in Write Zeroes: Not Supported 00:24:57.234 Deallocated Guard Field: 0xFFFF 00:24:57.234 Flush: Supported 00:24:57.234 Reservation: Not Supported 00:24:57.234 Namespace Sharing Capabilities: Multiple Controllers 00:24:57.234 Size (in LBAs): 1953525168 (931GiB) 00:24:57.234 Capacity (in LBAs): 1953525168 (931GiB) 00:24:57.234 Utilization (in LBAs): 1953525168 (931GiB) 00:24:57.234 UUID: f60cef96-2fcd-45f1-b569-a0d7982ce849 00:24:57.234 Thin Provisioning: Not Supported 00:24:57.234 Per-NS Atomic Units: Yes 00:24:57.234 Atomic Boundary Size (Normal): 0 00:24:57.234 Atomic Boundary Size (PFail): 0 00:24:57.234 Atomic Boundary Offset: 0 00:24:57.234 NGUID/EUI64 Never Reused: No 00:24:57.234 ANA group ID: 1 00:24:57.234 Namespace Write Protected: No 00:24:57.234 Number of LBA Formats: 1 00:24:57.234 Current LBA Format: LBA Format #00 00:24:57.234 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:57.234 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:57.234 rmmod nvme_tcp 00:24:57.234 rmmod nvme_fabrics 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:57.234 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.235 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.235 17:33:15 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.769 17:33:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:59.769 17:33:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:59.769 17:33:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:59.769 17:33:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:59.769 17:33:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:59.769 17:33:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:59.769 17:33:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:59.769 17:33:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:59.769 17:33:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:59.769 17:33:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:59.769 17:33:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:01.741 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:01.741 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:02.001 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:02.001 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:02.001 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:02.001 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:02.001 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:02.001 0000:00:04.0 (8086 2021): ioatdma -> 
vfio-pci 00:25:02.001 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:02.001 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:02.001 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:02.001 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:02.001 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:02.001 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:02.001 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:02.001 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:02.939 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:02.939 00:25:02.939 real 0m15.345s 00:25:02.939 user 0m3.653s 00:25:02.940 sys 0m8.019s 00:25:02.940 17:33:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:02.940 17:33:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.940 ************************************ 00:25:02.940 END TEST nvmf_identify_kernel_target 00:25:02.940 ************************************ 00:25:02.940 17:33:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:02.940 17:33:21 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:02.940 17:33:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:02.940 17:33:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.940 17:33:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:02.940 ************************************ 00:25:02.940 START TEST nvmf_auth_host 00:25:02.940 ************************************ 00:25:02.940 17:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:03.199 * Looking for test storage... 
00:25:03.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.200 
17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.200 
17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- 
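[Editorial annotation] The `host/auth.sh@13`/`@16` lines above show the test declaring three digests and five FFDHE groups; the later test phases iterate over their cross product. A minimal sketch of that enumeration (the loop body here is an assumption, the array values are taken from the log):

```shell
#!/usr/bin/env bash
# Arrays exactly as declared by host/auth.sh in the log above.
digests=("sha256" "sha384" "sha512")
dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")

# Enumerate every digest/DH-group pairing the auth test will exercise.
count=0
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        count=$((count + 1))
    done
done
echo "total combinations: $count"
```

With 3 digests and 5 groups this yields 15 combinations, which is why the auth test's connect phase is repeated many times in the remainder of the log.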
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:03.200 17:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:08.472 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.472 17:33:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:08.472 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:25:08.472 Found net devices under 0000:86:00.0: cvl_0_0 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:08.472 Found net devices under 0000:86:00.1: cvl_0_1 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:08.472 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:08.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:08.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:25:08.473 00:25:08.473 --- 10.0.0.2 ping statistics --- 00:25:08.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.473 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:25:08.473 17:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:08.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:25:08.473 00:25:08.473 --- 10.0.0.1 ping statistics --- 00:25:08.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.473 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.473 17:33:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2475 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2475 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2475 ']' 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:08.473 17:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.408 17:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:09.408 17:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:09.408 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:09.408 17:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:09.408 17:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.408 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.408 17:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:09.408 17:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:09.408 17:33:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:09.408 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.408 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8987008824d635abc776b43594609862 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.eYk 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8987008824d635abc776b43594609862 0 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8987008824d635abc776b43594609862 0 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8987008824d635abc776b43594609862 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.eYk 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.eYk 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.eYk 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0d14d13f6e8658d48399be90ce7a34edb27a491d87099454e3221ff46a3b75fa 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.acz 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0d14d13f6e8658d48399be90ce7a34edb27a491d87099454e3221ff46a3b75fa 3 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0d14d13f6e8658d48399be90ce7a34edb27a491d87099454e3221ff46a3b75fa 3 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0d14d13f6e8658d48399be90ce7a34edb27a491d87099454e3221ff46a3b75fa 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:09.409 17:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.acz 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.acz 00:25:09.409 17:33:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.acz 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f9f04a9ea2d36ca7bc2119a658658f1c03e40ef1c3e64c71 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.XAv 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f9f04a9ea2d36ca7bc2119a658658f1c03e40ef1c3e64c71 0 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f9f04a9ea2d36ca7bc2119a658658f1c03e40ef1c3e64c71 0 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f9f04a9ea2d36ca7bc2119a658658f1c03e40ef1c3e64c71 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.XAv 00:25:09.409 17:33:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.XAv 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.XAv 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=72686a6e11b79362d29144a5be212ed445836cc58bd43fbc 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.V1q 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 72686a6e11b79362d29144a5be212ed445836cc58bd43fbc 2 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 72686a6e11b79362d29144a5be212ed445836cc58bd43fbc 2 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=72686a6e11b79362d29144a5be212ed445836cc58bd43fbc 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:09.409 17:33:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.V1q 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.V1q 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.V1q 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ccec95aaf89f5075725ce1ea197056fe 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zSE 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ccec95aaf89f5075725ce1ea197056fe 1 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ccec95aaf89f5075725ce1ea197056fe 1 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ccec95aaf89f5075725ce1ea197056fe 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:09.409 17:33:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zSE 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zSE 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.zSE 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5f5e938094683213b1904e48ee97d964 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ERe 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5f5e938094683213b1904e48ee97d964 1 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5f5e938094683213b1904e48ee97d964 1 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5f5e938094683213b1904e48ee97d964 00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:09.668 
17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ERe
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ERe
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ERe
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dc7944167b3434df401ba97c148607e473337cbad46ad5c0
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Tvk
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dc7944167b3434df401ba97c148607e473337cbad46ad5c0 2
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dc7944167b3434df401ba97c148607e473337cbad46ad5c0 2
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:09.668 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dc7944167b3434df401ba97c148607e473337cbad46ad5c0
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Tvk
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Tvk
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Tvk
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5850f1e5eed298c3e62aff9aeae22c58
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.KXH
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5850f1e5eed298c3e62aff9aeae22c58 0
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5850f1e5eed298c3e62aff9aeae22c58 0
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5850f1e5eed298c3e62aff9aeae22c58
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.KXH
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.KXH
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.KXH
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f901baeb045358acad96b544034257957598922d4466023f3717b001231765ea
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.CYW
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f901baeb045358acad96b544034257957598922d4466023f3717b001231765ea 3
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f901baeb045358acad96b544034257957598922d4466023f3717b001231765ea 3
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f901baeb045358acad96b544034257957598922d4466023f3717b001231765ea
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python -
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.CYW
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.CYW
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.CYW
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]=
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2475
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2475 ']'
00:25:09.669 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:09.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
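Editorial aside: the gen_dhchap_key trace above boils down to a few shell steps. A minimal sketch follows, assuming only the xxd, mktemp and chmod calls visible in the trace; the DHHC-1 wrapping that format_dhchap_key performs through the inline `python -` step is deliberately not reproduced here.

```shell
# Sketch of gen_dhchap_key as traced in the log: draw len/2 random bytes as a
# hex string and stage it in a mode-0600 temp file. The DHHC-1:<digest>:...
# wrapping done by format_dhchap_key (inline python) is omitted.
digest=sha384
len=48
key=$(xxd -p -c0 -l "$((len / 2))" /dev/urandom)   # 24 random bytes -> 48 hex chars
file=$(mktemp -t "spdk.key-${digest}.XXX")         # e.g. /tmp/spdk.key-sha384.Tvk
printf '%s\n' "$key" > "$file"
chmod 0600 "$file"
echo "$file"
```

The resulting path is what the test stores in its keys[]/ckeys[] arrays and later registers over RPC with keyring_file_add_key.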
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eYk
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.acz ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.acz
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.XAv
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.V1q ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.V1q
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.zSE
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ERe ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ERe
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Tvk
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.KXH ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.KXH
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.CYW
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host --
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:09.928 17:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:25:10.186 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:25:10.186 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet
00:25:10.186 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:10.186 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:25:10.186 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:25:10.186 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme
00:25:10.186 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]]
00:25:10.186 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet
00:25:10.187 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:25:10.187 17:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:25:12.722 Waiting for block devices as requested
00:25:12.722 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:25:12.722 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:12.722 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:12.722 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:12.981 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:12.981 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:12.981 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:12.981 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:13.240 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:13.240 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:13.240 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:13.240 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:13.499 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:13.499 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:13.499 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:13.758 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:13.758 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:14.326 17:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:25:14.326 17:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:25:14.326 17:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:25:14.326 17:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:25:14.326 17:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:25:14.326 17:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:25:14.326 17:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1
00:25:14.326 17:33:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:25:14.326 17:33:32 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt=
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]]
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:14.326 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:25:14.584
00:25:14.584 Discovery Log Number of Records 2, Generation counter 2
00:25:14.584 =====Discovery Log Entry 0======
00:25:14.584 trtype: tcp
00:25:14.584 adrfam: ipv4
00:25:14.584 subtype: current discovery subsystem
00:25:14.585 treq: not specified, sq flow control disable supported
00:25:14.585 portid: 1
00:25:14.585 trsvcid: 4420
00:25:14.585 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:14.585 traddr: 10.0.0.1
00:25:14.585 eflags: none
00:25:14.585 sectype: none
00:25:14.585 =====Discovery Log Entry 1======
00:25:14.585 trtype: tcp
00:25:14.585 adrfam: ipv4
00:25:14.585 subtype: nvme subsystem
00:25:14.585 treq: not specified, sq flow control disable supported
00:25:14.585 portid: 1
00:25:14.585 trsvcid: 4420
00:25:14.585 subnqn: nqn.2024-02.io.spdk:cnode0
00:25:14.585 traddr: 10.0.0.1
00:25:14.585 eflags: none
00:25:14.585 sectype: none
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:14.585 17:33:33
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==:
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==:
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==:
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]]
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==:
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:14.585 nvme0n1
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l:
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=:
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l:
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]]
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=:
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.585 17:33:33 nvmf_tcp.nvmf_auth_host --
common/autotest_common.sh@10 -- # set +x
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:14.844 nvme0n1
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==:
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==:
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==:
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]]
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==:
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:14.844 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:15.103 nvme0n1
00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:15.103 17:33:33
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]] 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.103 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.362 nvme0n1 00:25:15.362 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.362 17:33:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.362 17:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.362 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.362 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.362 17:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.362 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.363 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.363 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.363 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.363 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:25:15.363 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.363 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.363 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.363 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.363 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.363 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:15.363 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.363 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.622 nvme0n1 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.622 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.882 nvme0n1 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.882 17:33:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]] 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.882 nvme0n1 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.882 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]] 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.141 17:33:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.141 nvme0n1 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.141 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]] 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.401 17:33:34 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.401 17:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.401 nvme0n1 00:25:16.401 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.401 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.401 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.401 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.401 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.401 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.401 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.401 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:25:16.401 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.401 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.660 17:33:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.660 nvme0n1 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.660 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.661 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.661 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:16.661 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.661 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:16.661 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.661 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.661 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.661 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.661 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.661 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:16.661 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.661 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.920 nvme0n1 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]] 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.920 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.921 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.921 17:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.921 17:33:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.921 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.921 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.180 nvme0n1 00:25:17.180 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.180 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.180 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.180 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.180 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.180 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]] 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.440 17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.440 
17:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.440 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.700 nvme0n1 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.700 17:33:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]] 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.700 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.701 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.701 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.701 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.701 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.701 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.701 17:33:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.701 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.701 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.701 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.701 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.960 nvme0n1 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.960 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.220 nvme0n1 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.220 17:33:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z '' ]] 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 
00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.220 17:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.479 nvme0n1 00:25:18.479 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.479 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.479 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.479 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.479 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.479 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.479 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.479 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.479 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.479 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]] 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:18.739 
17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.739 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.997 nvme0n1 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.997 17:33:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]] 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.997 17:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.566 nvme0n1 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]] 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.566 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.831 nvme0n1 
00:25:19.831 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.831 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.831 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.831 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.831 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.831 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.831 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.831 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.831 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.831 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.090 17:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.349 nvme0n1 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.349 17:33:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.349 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.915 nvme0n1 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.915 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 
00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]] 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.916 17:33:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.916 17:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.484 nvme0n1 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]] 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.484 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.051 nvme0n1 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]] 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:22.051 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.307 17:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.874 nvme0n1 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:22.874 17:33:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:22.874 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.875 17:33:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.875 17:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.444 nvme0n1 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.444 17:33:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.444 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.037 nvme0n1 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]] 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.037 
17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.037 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.294 nvme0n1 00:25:24.294 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.295 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.295 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.295 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.295 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.295 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.295 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.295 17:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.295 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.295 17:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.295 
17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]] 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.295 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.552 nvme0n1 00:25:24.552 17:33:43 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.552 17:33:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]] 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.552 17:33:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.552 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.812 nvme0n1 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.812 
17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.812 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.812 17:33:43 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.071 nvme0n1 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.071 17:33:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.071 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.072 nvme0n1 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.072 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]] 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.330 17:33:43 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.330 17:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.330 nvme0n1 00:25:25.330 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.330 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.330 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.330 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.330 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.330 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:25.589 17:33:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]] 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.589 nvme0n1 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.589 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]] 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.849 nvme0n1 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.849 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.108 17:33:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.108 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.109 nvme0n1 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.109 17:33:44 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:26.109 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.368 17:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.368 nvme0n1 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:26.368 
17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]] 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.368 
17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.368 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.625 nvme0n1 00:25:26.625 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.625 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.625 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.625 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.625 17:33:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.625 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]] 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.882 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.140 nvme0n1 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]] 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.140 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.141 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.141 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.141 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.141 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.141 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.141 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.141 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.141 17:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.141 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.141 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.141 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.399 nvme0n1 00:25:27.399 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.399 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:27.399 17:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.399 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.399 17:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.399 17:33:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.399 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.657 nvme0n1 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.658 17:33:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:27.658 17:33:46 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.658 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.917 nvme0n1 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.917 17:33:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]] 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.917 17:33:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.917 17:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.918 17:33:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.918 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.918 17:33:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.487 nvme0n1 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.487 
17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]] 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.487 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.488 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.488 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.488 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.488 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.488 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.488 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.488 17:33:47 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.488 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.747 nvme0n1 00:25:28.747 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.747 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.747 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.747 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.747 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]] 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.007 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.267 nvme0n1 00:25:29.267 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.267 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.267 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.267 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.267 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.267 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.267 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.267 17:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.267 17:33:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.267 17:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.267 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.836 nvme0n1 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.836 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.096 nvme0n1 00:25:30.096 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.096 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.096 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.096 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.096 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.096 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]] 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:30.355 17:33:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.355 17:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.924 nvme0n1 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]] 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.924 17:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.492 nvme0n1 00:25:31.492 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.492 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.492 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.492 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.492 17:33:50 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:31.492 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.492 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.492 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.492 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]] 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.493 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.060 nvme0n1 00:25:32.060 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.060 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.060 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.060 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.060 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.060 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.060 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.060 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.060 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.060 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.319 17:33:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.319 17:33:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.319 17:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.887 nvme0n1 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.887 
17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.887 17:33:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.887 17:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.455 nvme0n1 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:33.455 
17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]] 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.455 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.715 nvme0n1 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.715 17:33:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]] 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.715 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.975 nvme0n1 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]] 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.975 
17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.975 nvme0n1 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.975 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.236 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.236 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.236 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.236 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.236 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.236 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.236 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.236 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:34.236 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.236 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.236 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.236 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:34.236 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:34.237 
17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.237 17:33:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.237 nvme0n1 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.237 17:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.237 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.237 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:34.237 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.496 nvme0n1 00:25:34.496 17:33:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=:
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l:
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]]
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=:
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.496 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.754 nvme0n1
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==:
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==:
00:25:34.754 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==:
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]]
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==:
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:34.755 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.013 nvme0n1
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF:
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk:
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF:
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]]
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk:
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.013 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.271 nvme0n1
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:35.271 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==:
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl:
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==:
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]]
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl:
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.272 17:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.530 nvme0n1
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=:
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=:
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.530 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.789 nvme0n1
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l:
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=:
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l:
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]]
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=:
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.789 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:35.790 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.049 nvme0n1
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==:
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==:
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==:
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]]
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==:
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.049 17:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.307 nvme0n1
00:25:36.307 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.307 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:36.307 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:36.307 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.307 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.307 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF:
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk:
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF:
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]]
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk:
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:36.566 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:36.567 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:36.567 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:36.567 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.567 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.826 nvme0n1
00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host --
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.826 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.085 nvme0n1 00:25:37.085 17:33:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.085 17:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.344 nvme0n1 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.344 
17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]] 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.344 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.344 17:33:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.911 nvme0n1 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]] 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.912 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.171 nvme0n1 00:25:38.171 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.171 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.171 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.171 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.171 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.171 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.171 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.172 17:33:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]] 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.172 17:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.770 nvme0n1 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.770 17:33:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.770 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.771 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.038 nvme0n1 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.038 17:33:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.038 17:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.605 nvme0n1 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODk4NzAwODgyNGQ2MzVhYmM3NzZiNDM1OTQ2MDk4NjJgTq1l: 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: ]] 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGQxNGQxM2Y2ZTg2NThkNDgzOTliZTkwY2U3YTM0ZWRiMjdhNDkxZDg3MDk5NDU0ZTMyMjFmZjQ2YTNiNzVmYYpTXKA=: 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.605 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.606 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.172 nvme0n1 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]] 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.172 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.173 17:33:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.173 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.173 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.173 17:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.173 17:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.173 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.173 17:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.741 nvme0n1 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlYzk1YWFmODlmNTA3NTcyNWNlMWVhMTk3MDU2ZmXn5+NF: 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: ]] 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWY1ZTkzODA5NDY4MzIxM2IxOTA0ZTQ4ZWU5N2Q5NjQYwmdk: 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.741 17:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.307 nvme0n1 00:25:41.307 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.307 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:41.307 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.307 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.307 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.307 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZGM3OTQ0MTY3YjM0MzRkZjQwMWJhOTdjMTQ4NjA3ZTQ3MzMzN2NiYWQ0NmFkNWMw3ZE5iA==: 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: ]] 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTg1MGYxZTVlZWQyOThjM2U2MmFmZjlhZWFlMjJjNThqdugl: 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.566 17:34:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.566 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.131 nvme0n1 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.131 17:34:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjkwMWJhZWIwNDUzNThhY2FkOTZiNTQ0MDM0MjU3OTU3NTk4OTIyZDQ0NjYwMjNmMzcxN2IwMDEyMzE3NjVlYZo6Kxs=: 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:42.131 17:34:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.131 17:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.698 nvme0n1 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjlmMDRhOWVhMmQzNmNhN2JjMjExOWE2NTg2NThmMWMwM2U0MGVmMWMzZTY0YzcxIAvHag==: 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI2ODZhNmUxMWI3OTM2MmQyOTE0NGE1YmUyMTJlZDQ0NTgzNmNjNThiZDQzZmJjO12JmA==: 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.698 request: 00:25:42.698 { 00:25:42.698 "name": "nvme0", 00:25:42.698 "trtype": "tcp", 00:25:42.698 "traddr": "10.0.0.1", 00:25:42.698 "adrfam": "ipv4", 00:25:42.698 "trsvcid": "4420", 00:25:42.698 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:42.698 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:42.698 "prchk_reftag": false, 00:25:42.698 "prchk_guard": false, 00:25:42.698 "hdgst": false, 00:25:42.698 "ddgst": false, 00:25:42.698 "method": "bdev_nvme_attach_controller", 00:25:42.698 "req_id": 1 00:25:42.698 } 00:25:42.698 Got JSON-RPC error response 00:25:42.698 response: 00:25:42.698 { 00:25:42.698 "code": -5, 00:25:42.698 "message": "Input/output error" 00:25:42.698 } 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.698 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:42.699 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:42.699 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:42.699 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.957 request: 00:25:42.957 { 00:25:42.957 "name": "nvme0", 00:25:42.957 "trtype": "tcp", 00:25:42.957 "traddr": "10.0.0.1", 00:25:42.957 "adrfam": "ipv4", 00:25:42.957 "trsvcid": "4420", 00:25:42.957 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:42.957 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:42.957 "prchk_reftag": false, 00:25:42.957 "prchk_guard": false, 00:25:42.957 "hdgst": false, 00:25:42.957 "ddgst": false, 00:25:42.957 "dhchap_key": "key2", 00:25:42.957 "method": "bdev_nvme_attach_controller", 00:25:42.957 "req_id": 1 00:25:42.957 } 00:25:42.957 Got JSON-RPC error response 00:25:42.957 response: 00:25:42.957 { 
00:25:42.957 "code": -5, 00:25:42.957 "message": "Input/output error" 00:25:42.957 } 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.957 
17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.957 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.957 request: 00:25:42.957 { 00:25:42.957 "name": "nvme0", 00:25:42.957 "trtype": "tcp", 00:25:42.957 "traddr": "10.0.0.1", 00:25:42.957 "adrfam": "ipv4", 00:25:42.957 "trsvcid": "4420", 00:25:42.957 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:42.957 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:42.957 
"prchk_reftag": false, 00:25:42.957 "prchk_guard": false, 00:25:42.957 "hdgst": false, 00:25:42.957 "ddgst": false, 00:25:42.957 "dhchap_key": "key1", 00:25:42.957 "dhchap_ctrlr_key": "ckey2", 00:25:42.958 "method": "bdev_nvme_attach_controller", 00:25:42.958 "req_id": 1 00:25:42.958 } 00:25:42.958 Got JSON-RPC error response 00:25:42.958 response: 00:25:42.958 { 00:25:42.958 "code": -5, 00:25:42.958 "message": "Input/output error" 00:25:42.958 } 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.958 rmmod nvme_tcp 00:25:42.958 rmmod nvme_fabrics 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2475 ']' 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2475 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2475 ']' 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2475 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:42.958 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2475 00:25:43.216 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:43.216 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:43.216 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2475' 00:25:43.216 killing process with pid 2475 00:25:43.216 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2475 00:25:43.216 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2475 00:25:43.216 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:43.216 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:43.216 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:43.216 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:43.216 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:43.216 17:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.216 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:25:43.216 17:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.749 17:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:45.749 17:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:45.749 17:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:45.749 17:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:45.749 17:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:45.749 17:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:45.749 17:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:45.749 17:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:45.749 17:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:45.749 17:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:45.749 17:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:45.749 17:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:45.749 17:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:47.651 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 
0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:47.651 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:48.588 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:48.588 17:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.eYk /tmp/spdk.key-null.XAv /tmp/spdk.key-sha256.zSE /tmp/spdk.key-sha384.Tvk /tmp/spdk.key-sha512.CYW /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:48.588 17:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:51.124 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:51.124 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:80:04.5 (8086 
2021): Already using the vfio-pci driver 00:25:51.124 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:51.124 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:51.124 00:25:51.124 real 0m48.157s 00:25:51.124 user 0m43.222s 00:25:51.124 sys 0m11.142s 00:25:51.124 17:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:51.124 17:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.124 ************************************ 00:25:51.124 END TEST nvmf_auth_host 00:25:51.124 ************************************ 00:25:51.124 17:34:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:51.124 17:34:09 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:25:51.124 17:34:09 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:51.124 17:34:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:51.124 17:34:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.124 17:34:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:51.124 ************************************ 00:25:51.124 START TEST nvmf_digest 00:25:51.124 ************************************ 00:25:51.124 17:34:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:51.383 * Looking for test storage... 
00:25:51.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:51.383 17:34:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:56.655 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:56.656 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:56.656 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:56.656 Found net devices under 0000:86:00.0: cvl_0_0 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:56.656 Found net devices under 0000:86:00.1: cvl_0_1 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:56.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:56.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:25:56.656 00:25:56.656 --- 10.0.0.2 ping statistics --- 00:25:56.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.656 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:56.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:56.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:25:56.656 00:25:56.656 --- 10.0.0.1 ping statistics --- 00:25:56.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.656 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:56.656 ************************************ 00:25:56.656 START TEST nvmf_digest_clean 00:25:56.656 ************************************ 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:25:56.656 17:34:14 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=15701 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 15701 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 15701 ']' 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:56.656 17:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:56.656 [2024-07-12 17:34:15.026754] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:25:56.656 [2024-07-12 17:34:15.026792] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.656 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.656 [2024-07-12 17:34:15.082297] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.656 [2024-07-12 17:34:15.161001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.656 [2024-07-12 17:34:15.161036] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.656 [2024-07-12 17:34:15.161043] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.656 [2024-07-12 17:34:15.161049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.656 [2024-07-12 17:34:15.161054] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:56.656 [2024-07-12 17:34:15.161071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:57.223 null0 00:25:57.223 [2024-07-12 17:34:15.945406] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.223 [2024-07-12 17:34:15.969575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:57.223 
17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=15833 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 15833 /var/tmp/bperf.sock 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 15833 ']' 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:57.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:57.223 17:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:57.223 [2024-07-12 17:34:16.001868] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:25:57.223 [2024-07-12 17:34:16.001913] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid15833 ] 00:25:57.481 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.481 [2024-07-12 17:34:16.058503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.481 [2024-07-12 17:34:16.137724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.047 17:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:58.047 17:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:58.047 17:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:58.047 17:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:58.047 17:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:58.305 17:34:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:58.305 17:34:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:58.564 nvme0n1 00:25:58.564 17:34:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:58.564 17:34:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:25:58.822 Running I/O for 2 seconds... 00:26:00.724 00:26:00.724 Latency(us) 00:26:00.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.724 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:00.724 nvme0n1 : 2.01 25584.70 99.94 0.00 0.00 4996.55 2550.21 11625.52 00:26:00.724 =================================================================================================================== 00:26:00.724 Total : 25584.70 99.94 0.00 0.00 4996.55 2550.21 11625.52 00:26:00.724 0 00:26:00.724 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:00.724 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:00.724 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:00.724 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:00.724 | select(.opcode=="crc32c") 00:26:00.724 | "\(.module_name) \(.executed)"' 00:26:00.724 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 15833 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 15833 ']' 00:26:00.983 17:34:19 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 15833 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 15833 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 15833' 00:26:00.983 killing process with pid 15833 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 15833 00:26:00.983 Received shutdown signal, test time was about 2.000000 seconds 00:26:00.983 00:26:00.983 Latency(us) 00:26:00.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.983 =================================================================================================================== 00:26:00.983 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:00.983 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 15833 00:26:01.241 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:01.241 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:01.241 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:01.241 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:01.241 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- host/digest.sh@80 -- # bs=131072 00:26:01.241 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:01.241 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:01.241 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=16423 00:26:01.242 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 16423 /var/tmp/bperf.sock 00:26:01.242 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:01.242 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 16423 ']' 00:26:01.242 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:01.242 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:01.242 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:01.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:01.242 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:01.242 17:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:01.242 [2024-07-12 17:34:19.836041] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:26:01.242 [2024-07-12 17:34:19.836091] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid16423 ] 00:26:01.242 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:01.242 Zero copy mechanism will not be used. 00:26:01.242 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.242 [2024-07-12 17:34:19.890944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.242 [2024-07-12 17:34:19.958581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.176 17:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:02.176 17:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:02.176 17:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:02.176 17:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:02.176 17:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:02.176 17:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.177 17:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.449 nvme0n1 00:26:02.449 17:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:02.449 17:34:21 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:02.720 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:02.720 Zero copy mechanism will not be used. 00:26:02.720 Running I/O for 2 seconds... 00:26:04.626 00:26:04.626 Latency(us) 00:26:04.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.626 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:04.626 nvme0n1 : 2.00 5188.98 648.62 0.00 0.00 3080.45 705.22 7208.96 00:26:04.626 =================================================================================================================== 00:26:04.626 Total : 5188.98 648.62 0.00 0.00 3080.45 705.22 7208.96 00:26:04.626 0 00:26:04.626 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:04.626 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:04.626 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:04.626 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:04.626 | select(.opcode=="crc32c") 00:26:04.626 | "\(.module_name) \(.executed)"' 00:26:04.626 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 16423 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 16423 ']' 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 16423 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 16423 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 16423' 00:26:04.885 killing process with pid 16423 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 16423 00:26:04.885 Received shutdown signal, test time was about 2.000000 seconds 00:26:04.885 00:26:04.885 Latency(us) 00:26:04.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.885 =================================================================================================================== 00:26:04.885 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 16423 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:04.885 17:34:23 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=17122 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 17122 /var/tmp/bperf.sock 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 17122 ']' 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:04.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:04.885 17:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:05.144 [2024-07-12 17:34:23.692251] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:26:05.144 [2024-07-12 17:34:23.692299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17122 ] 00:26:05.144 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.144 [2024-07-12 17:34:23.746204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.144 [2024-07-12 17:34:23.817826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.078 17:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:06.078 17:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:06.078 17:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:06.078 17:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:06.078 17:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:06.078 17:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.078 17:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.336 nvme0n1 00:26:06.336 17:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:06.336 17:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:26:06.595 Running I/O for 2 seconds... 00:26:08.496 00:26:08.496 Latency(us) 00:26:08.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.497 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:08.497 nvme0n1 : 2.00 27261.41 106.49 0.00 0.00 4687.31 2108.55 6639.08 00:26:08.497 =================================================================================================================== 00:26:08.497 Total : 27261.41 106.49 0.00 0.00 4687.31 2108.55 6639.08 00:26:08.497 0 00:26:08.497 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:08.497 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:08.497 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:08.497 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:08.497 | select(.opcode=="crc32c") 00:26:08.497 | "\(.module_name) \(.executed)"' 00:26:08.497 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 17122 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 17122 ']' 00:26:08.754 17:34:27 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 17122 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 17122 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 17122' 00:26:08.754 killing process with pid 17122 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 17122 00:26:08.754 Received shutdown signal, test time was about 2.000000 seconds 00:26:08.754 00:26:08.754 Latency(us) 00:26:08.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.754 =================================================================================================================== 00:26:08.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:08.754 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 17122 00:26:09.011 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:09.011 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:09.011 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:09.012 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:09.012 17:34:27 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:09.012 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:09.012 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:09.012 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=17806 00:26:09.012 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 17806 /var/tmp/bperf.sock 00:26:09.012 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:09.012 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 17806 ']' 00:26:09.012 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:09.012 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:09.012 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:09.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:09.012 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:09.012 17:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:09.012 [2024-07-12 17:34:27.638011] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:26:09.012 [2024-07-12 17:34:27.638068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17806 ] 00:26:09.012 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:09.012 Zero copy mechanism will not be used. 00:26:09.012 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.012 [2024-07-12 17:34:27.692219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.012 [2024-07-12 17:34:27.761131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.946 17:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:09.946 17:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:09.946 17:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:09.946 17:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:09.946 17:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:09.946 17:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:09.946 17:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.204 nvme0n1 00:26:10.463 17:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:10.463 17:34:28 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.463 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:10.463 Zero copy mechanism will not be used. 00:26:10.463 Running I/O for 2 seconds... 00:26:12.369 00:26:12.369 Latency(us) 00:26:12.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.369 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:12.369 nvme0n1 : 2.00 5967.54 745.94 0.00 0.00 2676.16 1410.45 4673.00 00:26:12.369 =================================================================================================================== 00:26:12.369 Total : 5967.54 745.94 0.00 0.00 2676.16 1410.45 4673.00 00:26:12.369 0 00:26:12.369 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:12.369 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:12.369 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:12.369 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:12.369 | select(.opcode=="crc32c") 00:26:12.369 | "\(.module_name) \(.executed)"' 00:26:12.369 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 17806 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 17806 ']' 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 17806 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 17806 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 17806' 00:26:12.628 killing process with pid 17806 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 17806 00:26:12.628 Received shutdown signal, test time was about 2.000000 seconds 00:26:12.628 00:26:12.628 Latency(us) 00:26:12.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.628 =================================================================================================================== 00:26:12.628 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:12.628 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 17806 00:26:12.887 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 15701 00:26:12.887 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 15701 ']' 00:26:12.887 17:34:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 15701 00:26:12.887 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:12.887 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:12.887 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 15701 00:26:12.887 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:12.887 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:12.887 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 15701' 00:26:12.887 killing process with pid 15701 00:26:12.887 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 15701 00:26:12.887 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 15701 00:26:13.146 00:26:13.146 real 0m16.760s 00:26:13.146 user 0m32.196s 00:26:13.146 sys 0m4.371s 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.146 ************************************ 00:26:13.146 END TEST nvmf_digest_clean 00:26:13.146 ************************************ 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest -- 
common/autotest_common.sh@10 -- # set +x 00:26:13.146 ************************************ 00:26:13.146 START TEST nvmf_digest_error 00:26:13.146 ************************************ 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=18486 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 18486 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 18486 ']' 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.146 17:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.146 [2024-07-12 17:34:31.842889] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:26:13.146 [2024-07-12 17:34:31.842925] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.146 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.146 [2024-07-12 17:34:31.899879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.405 [2024-07-12 17:34:31.978042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.405 [2024-07-12 17:34:31.978082] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.405 [2024-07-12 17:34:31.978089] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.405 [2024-07-12 17:34:31.978095] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.405 [2024-07-12 17:34:31.978100] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:13.405 [2024-07-12 17:34:31.978140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:13.972 [2024-07-12 17:34:32.696232] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.972 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:14.234 null0 00:26:14.234 [2024-07-12 17:34:32.785777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.234 
[2024-07-12 17:34:32.809931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=18574 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 18574 /var/tmp/bperf.sock 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 18574 ']' 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:14.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:14.234 17:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:14.234 [2024-07-12 17:34:32.857964] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:26:14.234 [2024-07-12 17:34:32.858005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid18574 ] 00:26:14.234 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.234 [2024-07-12 17:34:32.910801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.234 [2024-07-12 17:34:32.983495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.177 17:34:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:15.178 17:34:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:15.178 17:34:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:15.178 17:34:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:15.178 17:34:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:15.178 17:34:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.178 17:34:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:15.178 17:34:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:26:15.178 17:34:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.178 17:34:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.437 nvme0n1 00:26:15.437 17:34:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:15.437 17:34:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.437 17:34:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:15.437 17:34:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.437 17:34:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:15.437 17:34:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.437 Running I/O for 2 seconds... 
00:26:15.437 [2024-07-12 17:34:34.213819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.437 [2024-07-12 17:34:34.213852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.437 [2024-07-12 17:34:34.213863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.224004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.224029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.224038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.231981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.232002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.232010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.243417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.243439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.243447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.254684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.254705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.254714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.263382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.263417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.263425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.273446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.273465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.273473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.283769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.283789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.283797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.294253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.294274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.294282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.302875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.302894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.302902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.312124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.312143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.312151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.323404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.323423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:15.697 [2024-07-12 17:34:34.323431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.334759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.334782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.334789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.343321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.343342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.343350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.356018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.356039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.356047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.368559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.368580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:2007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.368589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.380394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.380415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.380427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.392565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.392586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.392595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.403861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.403881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.403889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.411793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.411813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.411822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.423222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.423243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.423251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.433616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.433638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.433645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.445760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.697 [2024-07-12 17:34:34.445780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.445788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-12 17:34:34.454449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 
00:26:15.697 [2024-07-12 17:34:34.454469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-12 17:34:34.454477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.698 [2024-07-12 17:34:34.466184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.698 [2024-07-12 17:34:34.466205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.698 [2024-07-12 17:34:34.466212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-07-12 17:34:34.477417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.957 [2024-07-12 17:34:34.477442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-07-12 17:34:34.477450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-07-12 17:34:34.486311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.957 [2024-07-12 17:34:34.486331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-07-12 17:34:34.486339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-07-12 17:34:34.497664] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.957 [2024-07-12 17:34:34.497684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-07-12 17:34:34.497692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-07-12 17:34:34.507780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.957 [2024-07-12 17:34:34.507801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-07-12 17:34:34.507809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-07-12 17:34:34.518492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.957 [2024-07-12 17:34:34.518512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-07-12 17:34:34.518520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-07-12 17:34:34.526888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.957 [2024-07-12 17:34:34.526908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-07-12 17:34:34.526916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-07-12 17:34:34.537731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.957 [2024-07-12 17:34:34.537751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-07-12 17:34:34.537759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-07-12 17:34:34.546425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.957 [2024-07-12 17:34:34.546444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-07-12 17:34:34.546452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-07-12 17:34:34.556456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.957 [2024-07-12 17:34:34.556477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.957 [2024-07-12 17:34:34.556484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.957 [2024-07-12 17:34:34.567905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.957 [2024-07-12 17:34:34.567926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.567934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.577613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.577633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.577640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.586693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.586714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.586721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.597707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.597727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.597734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.607840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.607860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.607868] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.616455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.616475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.616483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.628123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.628144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.628151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.636402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.636421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.636429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.647970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.647991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12266 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.648003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.656893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.656912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.656921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.666335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.666355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.666363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.675710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.675729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.675736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.685389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.685409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:93 nsid:1 lba:17741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.685416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.694102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.694121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.694129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.705418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.705439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.705447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.715317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.715337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.715344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.724330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.724350] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.724357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.958 [2024-07-12 17:34:34.733092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:15.958 [2024-07-12 17:34:34.733117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.958 [2024-07-12 17:34:34.733125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.743823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.218 [2024-07-12 17:34:34.743844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.218 [2024-07-12 17:34:34.743851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.753918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.218 [2024-07-12 17:34:34.753937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.218 [2024-07-12 17:34:34.753945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.762713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf0ff20) 00:26:16.218 [2024-07-12 17:34:34.762733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.218 [2024-07-12 17:34:34.762741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.772893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.218 [2024-07-12 17:34:34.772913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.218 [2024-07-12 17:34:34.772921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.782745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.218 [2024-07-12 17:34:34.782765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.218 [2024-07-12 17:34:34.782773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.791723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.218 [2024-07-12 17:34:34.791743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.218 [2024-07-12 17:34:34.791750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.803189] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.218 [2024-07-12 17:34:34.803208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.218 [2024-07-12 17:34:34.803216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.814466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.218 [2024-07-12 17:34:34.814485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.218 [2024-07-12 17:34:34.814492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.823272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.218 [2024-07-12 17:34:34.823290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.218 [2024-07-12 17:34:34.823297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.834150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.218 [2024-07-12 17:34:34.834171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.218 [2024-07-12 17:34:34.834178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.844056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.218 [2024-07-12 17:34:34.844074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.218 [2024-07-12 17:34:34.844081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.852182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.218 [2024-07-12 17:34:34.852201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.218 [2024-07-12 17:34:34.852208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.861953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.218 [2024-07-12 17:34:34.861972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.218 [2024-07-12 17:34:34.861980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.218 [2024-07-12 17:34:34.872145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.219 [2024-07-12 17:34:34.872164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.219 [2024-07-12 17:34:34.872172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.219 [2024-07-12 17:34:34.880408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.219 [2024-07-12 17:34:34.880428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.219 [2024-07-12 17:34:34.880435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.219 [2024-07-12 17:34:34.890389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.219 [2024-07-12 17:34:34.890409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.219 [2024-07-12 17:34:34.890417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.219 [2024-07-12 17:34:34.901245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.219 [2024-07-12 17:34:34.901265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.219 [2024-07-12 17:34:34.901276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.219 [2024-07-12 17:34:34.910585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.219 [2024-07-12 17:34:34.910604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.219 [2024-07-12 17:34:34.910613] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.219 [2024-07-12 17:34:34.920991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.219 [2024-07-12 17:34:34.921011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.219 [2024-07-12 17:34:34.921020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.219 [2024-07-12 17:34:34.929306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.219 [2024-07-12 17:34:34.929326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.219 [2024-07-12 17:34:34.929334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.219 [2024-07-12 17:34:34.941080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.219 [2024-07-12 17:34:34.941101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.219 [2024-07-12 17:34:34.941109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.219 [2024-07-12 17:34:34.951093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.219 [2024-07-12 17:34:34.951113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19362 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:16.219 [2024-07-12 17:34:34.951122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.219 [2024-07-12 17:34:34.959794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.219 [2024-07-12 17:34:34.959814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.219 [2024-07-12 17:34:34.959821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.219 [2024-07-12 17:34:34.969343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.219 [2024-07-12 17:34:34.969363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.219 [2024-07-12 17:34:34.969370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.219 [2024-07-12 17:34:34.978592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.219 [2024-07-12 17:34:34.978612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.219 [2024-07-12 17:34:34.978620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.219 [2024-07-12 17:34:34.988967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.219 [2024-07-12 17:34:34.988988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:87 nsid:1 lba:5711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.219 [2024-07-12 17:34:34.988995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.478 [2024-07-12 17:34:34.998991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.478 [2024-07-12 17:34:34.999011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.478 [2024-07-12 17:34:34.999019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.478 [2024-07-12 17:34:35.007685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.478 [2024-07-12 17:34:35.007705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.478 [2024-07-12 17:34:35.007713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.018412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.018432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.018440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.029511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.029531] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.029539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.038722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.038741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.038749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.048660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.048681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.048689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.058826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.058845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.058853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.071480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.071498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.071510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.083912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.083931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.083938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.096521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.096541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.096548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.104809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.104829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.104836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.116768] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.116787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.116795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.127953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.127972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.127980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.140496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.140515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.140523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.148819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.148838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.148846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.161212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.161231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.161239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.174217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.174241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.174248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.182770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.182790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.182799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.193348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.193366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.193374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.202132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.202152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.202160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.212427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.212447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.212454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.222117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.222139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.222147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.231780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.231801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.231808] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.240443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.240463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.240471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.479 [2024-07-12 17:34:35.251597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.479 [2024-07-12 17:34:35.251616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.479 [2024-07-12 17:34:35.251624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.261229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.261249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.261257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.270534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.270553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19307 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.270562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.280430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.280449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.280457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.288915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.288935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.288943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.300659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.300679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.300687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.309006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.309025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:1508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.309032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.320054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.320074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.320081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.330751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.330771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.330778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.342537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.342556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.342567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.353161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.353180] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.353187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.361399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.361419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.361426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.371696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.371715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.371723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.381622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.381641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.381648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.390617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.390635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.390643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.401599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.401618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.401626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.409925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.409944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.409952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.422146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.422165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.739 [2024-07-12 17:34:35.422173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.739 [2024-07-12 17:34:35.429862] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.739 [2024-07-12 17:34:35.429884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.740 [2024-07-12 17:34:35.429892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.740 [2024-07-12 17:34:35.439674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.740 [2024-07-12 17:34:35.439693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.740 [2024-07-12 17:34:35.439701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.740 [2024-07-12 17:34:35.450242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.740 [2024-07-12 17:34:35.450261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.740 [2024-07-12 17:34:35.450269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.740 [2024-07-12 17:34:35.458542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.740 [2024-07-12 17:34:35.458561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.740 [2024-07-12 17:34:35.458569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:16.740 [2024-07-12 17:34:35.469591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.740 [2024-07-12 17:34:35.469611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.740 [2024-07-12 17:34:35.469618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.740 [2024-07-12 17:34:35.480696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.740 [2024-07-12 17:34:35.480716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.740 [2024-07-12 17:34:35.480724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.740 [2024-07-12 17:34:35.489616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.740 [2024-07-12 17:34:35.489635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.740 [2024-07-12 17:34:35.489642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.740 [2024-07-12 17:34:35.498707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.740 [2024-07-12 17:34:35.498726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.740 [2024-07-12 17:34:35.498734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.740 [2024-07-12 17:34:35.509323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.740 [2024-07-12 17:34:35.509343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.740 [2024-07-12 17:34:35.509350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.999 [2024-07-12 17:34:35.517876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:16.999 [2024-07-12 17:34:35.517896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.999 [2024-07-12 17:34:35.517904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.999 [2024-07-12 17:34:35.527656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.527676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.527684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.536908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.536928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.536935] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.547409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.547429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.547436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.555954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.555974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.555981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.566667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.566686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.566694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.576221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.576240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25379 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.576247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.585614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.585633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.585640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.594862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.594881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.594893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.604010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.604030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.604037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.615181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.615200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:47 nsid:1 lba:9779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.615208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.623615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.623635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.623642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.635418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.635437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.635445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.643505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.643524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.643532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.654503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.654522] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.654530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.663989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.664009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.664016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.674620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.674639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.674646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.684040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.684059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.684066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.693001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.693021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.693029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.704564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.704583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.704591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.713894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.713913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.713921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.722483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.722502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.722510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.732744] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.732763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.000 [2024-07-12 17:34:35.732770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.000 [2024-07-12 17:34:35.742017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.000 [2024-07-12 17:34:35.742036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.001 [2024-07-12 17:34:35.742044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.001 [2024-07-12 17:34:35.751814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.001 [2024-07-12 17:34:35.751833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.001 [2024-07-12 17:34:35.751841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.001 [2024-07-12 17:34:35.764444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.001 [2024-07-12 17:34:35.764464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.001 [2024-07-12 17:34:35.764475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:17.001 [2024-07-12 17:34:35.772576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.001 [2024-07-12 17:34:35.772595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.001 [2024-07-12 17:34:35.772602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.260 [2024-07-12 17:34:35.785124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.260 [2024-07-12 17:34:35.785145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.260 [2024-07-12 17:34:35.785153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.260 [2024-07-12 17:34:35.793724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.260 [2024-07-12 17:34:35.793744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.260 [2024-07-12 17:34:35.793752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.260 [2024-07-12 17:34:35.805303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.260 [2024-07-12 17:34:35.805322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.260 [2024-07-12 17:34:35.805330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.816247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.816268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.816276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.824727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.824748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.824755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.836389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.836410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.836419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.844946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.844967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.844975] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.856351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.856383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.856391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.867509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.867529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.867537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.875797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.875816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.875824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.886906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.886926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12601 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.886933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.899014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.899033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.899041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.907575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.907594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.907602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.918369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.918395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.918403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.926601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.926620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:16 nsid:1 lba:18169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.926627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.936914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.936933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.936941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.946029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.946049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.946057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.957051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.957071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.957078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.968054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.968074] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.968081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.976362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.976389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.976397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:35.988973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:35.988993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:35.989000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:36.001220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:36.001240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:36.001247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:36.014700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:36.014721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:36.014729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:36.027461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:36.027483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:36.027491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.261 [2024-07-12 17:34:36.038318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.261 [2024-07-12 17:34:36.038337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.261 [2024-07-12 17:34:36.038350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.521 [2024-07-12 17:34:36.050786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.521 [2024-07-12 17:34:36.050805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.521 [2024-07-12 17:34:36.050813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.521 [2024-07-12 17:34:36.063548] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.521 [2024-07-12 17:34:36.063572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.063579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.522 [2024-07-12 17:34:36.073343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.522 [2024-07-12 17:34:36.073363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.073371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.522 [2024-07-12 17:34:36.081824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.522 [2024-07-12 17:34:36.081843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.081850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.522 [2024-07-12 17:34:36.092535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.522 [2024-07-12 17:34:36.092555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.092563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:17.522 [2024-07-12 17:34:36.103121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.522 [2024-07-12 17:34:36.103140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.103148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.522 [2024-07-12 17:34:36.111252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.522 [2024-07-12 17:34:36.111272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.111279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.522 [2024-07-12 17:34:36.122735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.522 [2024-07-12 17:34:36.122755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.122763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.522 [2024-07-12 17:34:36.131286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.522 [2024-07-12 17:34:36.131310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.131317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.522 [2024-07-12 17:34:36.143755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.522 [2024-07-12 17:34:36.143776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.143783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.522 [2024-07-12 17:34:36.155482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.522 [2024-07-12 17:34:36.155502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.155509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.522 [2024-07-12 17:34:36.164117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.522 [2024-07-12 17:34:36.164137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.164144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.522 [2024-07-12 17:34:36.176554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.522 [2024-07-12 17:34:36.176574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.176582] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.522 [2024-07-12 17:34:36.187637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.522 [2024-07-12 17:34:36.187657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.187665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.522 [2024-07-12 17:34:36.200597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf0ff20) 00:26:17.522 [2024-07-12 17:34:36.200617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.522 [2024-07-12 17:34:36.200625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.522 00:26:17.522 Latency(us) 00:26:17.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.522 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:17.522 nvme0n1 : 2.00 25052.48 97.86 0.00 0.00 5104.65 2564.45 18350.08 00:26:17.522 =================================================================================================================== 00:26:17.522 Total : 25052.48 97.86 0.00 0.00 5104.65 2564.45 18350.08 00:26:17.522 0 00:26:17.522 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:17.522 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:17.522 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- 
# jq -r '.bdevs[0] 00:26:17.522 | .driver_specific 00:26:17.522 | .nvme_error 00:26:17.522 | .status_code 00:26:17.522 | .command_transient_transport_error' 00:26:17.522 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:17.781 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 196 > 0 )) 00:26:17.781 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 18574 00:26:17.781 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 18574 ']' 00:26:17.781 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 18574 00:26:17.781 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:17.781 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:17.781 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 18574 00:26:17.781 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:17.781 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:17.781 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 18574' 00:26:17.781 killing process with pid 18574 00:26:17.781 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 18574 00:26:17.781 Received shutdown signal, test time was about 2.000000 seconds 00:26:17.781 00:26:17.781 Latency(us) 00:26:17.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.781 
=================================================================================================================== 00:26:17.781 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.781 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 18574 00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=19274 00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 19274 /var/tmp/bperf.sock 00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 19274 ']' 00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:18.041 17:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:18.041 [2024-07-12 17:34:36.678411] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:26:18.041 [2024-07-12 17:34:36.678458] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid19274 ] 00:26:18.041 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.041 Zero copy mechanism will not be used. 00:26:18.041 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.041 [2024-07-12 17:34:36.732382] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.041 [2024-07-12 17:34:36.803790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.979 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:18.979 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:18.979 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:18.979 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:18.979 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:18.979 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.979 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 
-- # set +x 00:26:18.979 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.979 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.979 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.239 nvme0n1 00:26:19.239 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:19.239 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.239 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:19.239 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.239 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:19.239 17:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:19.499 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:19.499 Zero copy mechanism will not be used. 00:26:19.499 Running I/O for 2 seconds... 
00:26:19.499 [2024-07-12 17:34:38.067650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:19.499 [2024-07-12 17:34:38.067683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.499 [2024-07-12 17:34:38.067694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:19.499 [2024-07-12 17:34:38.075015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:19.499 [2024-07-12 17:34:38.075038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.499 [2024-07-12 17:34:38.075048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error -> READ command -> COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for every READ on qid:1 of tqpair 0x1edc0b0 from 17:34:38.081 through 17:34:38.537, differing only in timestamp, cid (0-12, 15), lba, and sqhd ...]
00:26:20.023 [2024-07-12 17:34:38.543664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.023 [2024-07-12 17:34:38.543686] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.543694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.549853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.549873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.549882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.556065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.556085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.556093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.562906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.562930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.562938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.570028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.570048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.570056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.576496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.576517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.576525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.583427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.583448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.583456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.590630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.590650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.590658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.597181] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.597202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.597210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.603677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.603698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.603705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.609970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.609990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.609998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.617174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.617195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.617202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.624284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.624304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.624312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.631531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.631551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.631559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.637704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.637725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.637733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.643930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.643950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.643957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.650053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.650074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.650082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.656249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.656269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.656277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.662286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.662307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 17:34:38.662314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.023 [2024-07-12 17:34:38.668105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.023 [2024-07-12 17:34:38.668126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.023 [2024-07-12 
17:34:38.668133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.674013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.674035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.674048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.679878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.679898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.679906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.685596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.685617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.685625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.691233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.691255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.691263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.697084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.697105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.697113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.702883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.702904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.702913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.708363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.708392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.708400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.714045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.714066] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.714074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.719903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.719923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.719932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.725431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.725455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.725462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.730915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.730936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.730944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.736419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 
17:34:38.736439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.736447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.742253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.742274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.742281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.747884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.747904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.747912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.753398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.753419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.753426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.758931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.758952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.758960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.764374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.764401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.764408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.770025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.770046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.770057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.775592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.775612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.775620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.781310] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.781331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.781339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.786860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.786881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.786889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.792422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.792442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.792450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.024 [2024-07-12 17:34:38.798131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.024 [2024-07-12 17:34:38.798151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.024 [2024-07-12 17:34:38.798159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.803854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.803875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.803883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.809467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.809488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.809496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.814997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.815018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.815025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.820675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.820699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.820706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.826360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.826388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.826397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.831963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.831984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.831992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.837672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.837695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.837703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.843360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.843390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.843399] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.849080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.849102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.849110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.854686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.854706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.854714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.860159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.860179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.860187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.865703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.865725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.865733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.871425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.871447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.871454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.877139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.877161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.877168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.882747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.882768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.882777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.888371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.888397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.888405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.893866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.893886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.893894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.899519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.899539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.899547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.905226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.905246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.905254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.910767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.315 [2024-07-12 17:34:38.910788] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.315 [2024-07-12 17:34:38.910796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.315 [2024-07-12 17:34:38.916736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:38.916757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:38.916769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:38.923697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:38.923718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:38.923726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:38.931181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:38.931204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:38.931212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:38.938633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:38.938654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:38.938662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:38.947065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:38.947088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:38.947096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:38.955864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:38.955886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:38.955894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:38.965154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:38.965177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:38.965185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:38.974185] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:38.974207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:38.974216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:38.983166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:38.983188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:38.983196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:38.992370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:38.992401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:38.992410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:39.000509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:39.000532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:39.000541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:39.009682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:39.009705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:39.009713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:39.018479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:39.018502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:39.018510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:39.027555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:39.027576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:39.027584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:39.036476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:39.036499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:39.036507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:39.045309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:39.045331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:39.045339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:39.053843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:39.053864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:39.053873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.316 [2024-07-12 17:34:39.061981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.316 [2024-07-12 17:34:39.062004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.316 [2024-07-12 17:34:39.062013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.070146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.070169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 
17:34:39.070178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.078782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.078805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 17:34:39.078814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.088046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.088068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 17:34:39.088077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.096488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.096510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 17:34:39.096518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.105263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.105285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 17:34:39.105293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.113723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.113744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 17:34:39.113752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.121361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.121386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 17:34:39.121395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.128571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.128591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 17:34:39.128599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.135814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.135834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 17:34:39.135846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.142940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.142961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 17:34:39.142968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.149554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.149575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 17:34:39.149583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.157075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.157096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 17:34:39.157104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.164414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.164435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 17:34:39.164443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.171284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.171304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.578 [2024-07-12 17:34:39.171312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.578 [2024-07-12 17:34:39.177627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.578 [2024-07-12 17:34:39.177647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.177655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.184009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.184029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.184036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.190498] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.190519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.190526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.195791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.195811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.195820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.199449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.199469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.199477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.205429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.205449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.205458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.211188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.211208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.211216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.216710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.216729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.216737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.222199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.222219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.222227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.227657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.227676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.227684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.233004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.233024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.233032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.238386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.238406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.238418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.243736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.243757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.243765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.249374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.249399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 
17:34:39.249407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.255713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.255732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.255740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.262856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.262875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.262883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.269566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.269586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.269594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.276302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.276323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.276330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.282326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.282347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.282354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.288274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.288294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.288302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.293989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.294014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.579 [2024-07-12 17:34:39.294023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.579 [2024-07-12 17:34:39.299810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:20.579 [2024-07-12 17:34:39.299831] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.579 [2024-07-12 17:34:39.299839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.579 [2024-07-12 17:34:39.306343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.579 [2024-07-12 17:34:39.306364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.580 [2024-07-12 17:34:39.306372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.580 [2024-07-12 17:34:39.313713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.580 [2024-07-12 17:34:39.313734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.580 [2024-07-12 17:34:39.313742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.580 [2024-07-12 17:34:39.320476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.580 [2024-07-12 17:34:39.320496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.580 [2024-07-12 17:34:39.320504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.580 [2024-07-12 17:34:39.327148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.580 [2024-07-12 17:34:39.327169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.580 [2024-07-12 17:34:39.327177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.580 [2024-07-12 17:34:39.333537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.580 [2024-07-12 17:34:39.333557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.580 [2024-07-12 17:34:39.333565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.580 [2024-07-12 17:34:39.339916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.580 [2024-07-12 17:34:39.339937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.580 [2024-07-12 17:34:39.339945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.580 [2024-07-12 17:34:39.346317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.580 [2024-07-12 17:34:39.346338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.580 [2024-07-12 17:34:39.346345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.580 [2024-07-12 17:34:39.352784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.580 [2024-07-12 17:34:39.352805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.580 [2024-07-12 17:34:39.352813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.359752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.359774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.359782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.366730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.366750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.366758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.373743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.373763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.373771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.380973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.380993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.381001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.388121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.388140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.388148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.394760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.394781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.394788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.401091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.401111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.401119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.407985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.408004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.408015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.415345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.415364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.415372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.421777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.421797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.421804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.428169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.428188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.428196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.434844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.434863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.434871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.442099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.442119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.442127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.448964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.448983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.448991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.455511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.455530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.455538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.462597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.462617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.462625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.469705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.469729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.469737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.476350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.476371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.476384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.839 [2024-07-12 17:34:39.482285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.839 [2024-07-12 17:34:39.482305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.839 [2024-07-12 17:34:39.482313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.488445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.488464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.488479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.494270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.494290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.494298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.500159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.500180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.500188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.506276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.506295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.506303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.512010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.512031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.512039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.517887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.517907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.517916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.523592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.523613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.523621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.529498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.529518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.529526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.535248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.535269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.535277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.540859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.540880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.540888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.546525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.546545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.546553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.552970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.552991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.552999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.560106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.560127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.560135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.568129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.568149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.568157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.575187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.575211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.575219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.581554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.581574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.581581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.587718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.587739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.587747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.593695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.593714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.593722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.599795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.599815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.599823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.605935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.605955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.605962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:20.840 [2024-07-12 17:34:39.612724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:20.840 [2024-07-12 17:34:39.612744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:20.840 [2024-07-12 17:34:39.612752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.619609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.619630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.619638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.626721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.626742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.626751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.633228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.633250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.633258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.641101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.641123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.641132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.648865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.648886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.648895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.657714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.657735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.657743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.665920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.665941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.665950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.673213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.673234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.673242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.680066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.680087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.680094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.686934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.686954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.686962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.693403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.693424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.693435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.700351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.700372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.700385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.706488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.706509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.706517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.712574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.712595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.712603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.718547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.718567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.718574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.724479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.724498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.724506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.730339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.730359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.730367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.736076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.736097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.736104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.741777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.741798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.741806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.747528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.747552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.747560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.753195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.753216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.753224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.758906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.758926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.758933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.764643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.764664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.764672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.099 [2024-07-12 17:34:39.770417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.099 [2024-07-12 17:34:39.770437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.099 [2024-07-12 17:34:39.770445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.100 [2024-07-12 17:34:39.776067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.100 [2024-07-12 17:34:39.776087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.100 [2024-07-12 17:34:39.776094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.100 [2024-07-12 17:34:39.781692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.100 [2024-07-12 17:34:39.781711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.100 [2024-07-12 17:34:39.781719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.100 [2024-07-12 17:34:39.787491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.100 [2024-07-12 17:34:39.787512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.100 [2024-07-12 17:34:39.787519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.100 [2024-07-12 17:34:39.793209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.100 [2024-07-12 17:34:39.793229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.100 [2024-07-12 17:34:39.793237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.100 [2024-07-12 17:34:39.798869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.100 [2024-07-12 17:34:39.798889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.100 [2024-07-12 17:34:39.798897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:21.100 [2024-07-12 17:34:39.804648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.100 [2024-07-12 17:34:39.804668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.100 [2024-07-12 17:34:39.804675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:21.100 [2024-07-12 17:34:39.810352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.100 [2024-07-12 17:34:39.810372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.100 [2024-07-12 17:34:39.810384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:21.100 [2024-07-12 17:34:39.815960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.100 [2024-07-12 17:34:39.815980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.100 [2024-07-12 17:34:39.815988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:21.100 [2024-07-12 17:34:39.821625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0)
00:26:21.100 [2024-07-12 17:34:39.821645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:21.100 [2024-07-12 17:34:39.821653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.100 [2024-07-12 17:34:39.827305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.100 [2024-07-12 17:34:39.827325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.100 [2024-07-12 17:34:39.827332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.100 [2024-07-12 17:34:39.832988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.100 [2024-07-12 17:34:39.833008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.100 [2024-07-12 17:34:39.833016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.100 [2024-07-12 17:34:39.838736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.100 [2024-07-12 17:34:39.838756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.100 [2024-07-12 17:34:39.838763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.100 [2024-07-12 17:34:39.844550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.100 [2024-07-12 17:34:39.844571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.100 [2024-07-12 
17:34:39.844582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.100 [2024-07-12 17:34:39.850337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.100 [2024-07-12 17:34:39.850357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.100 [2024-07-12 17:34:39.850365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.100 [2024-07-12 17:34:39.856154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.100 [2024-07-12 17:34:39.856174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.100 [2024-07-12 17:34:39.856181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.100 [2024-07-12 17:34:39.861921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.100 [2024-07-12 17:34:39.861942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.100 [2024-07-12 17:34:39.861950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.100 [2024-07-12 17:34:39.867679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.100 [2024-07-12 17:34:39.867700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.100 [2024-07-12 17:34:39.867707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.100 [2024-07-12 17:34:39.873395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.100 [2024-07-12 17:34:39.873416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.100 [2024-07-12 17:34:39.873423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.879044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.879065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.879073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.884625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.884645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.884653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.890277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.890297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.890305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.895837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.895860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.895868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.901388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.901408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.901416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.906782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.906802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.906809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.912207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.912227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.912235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.917732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.917752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.917760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.923294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.923314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.923321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.929068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.929088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.929096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.934759] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.934779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.934786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.940457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.940477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.940488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.946167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.946187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.946194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.951946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.951967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.951974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.957542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.957562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.957571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.963158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.963178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.963185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.968892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.968912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.968919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.974618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.974638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.974646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.980400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.980420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.980428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.986136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.986156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.986164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.991854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.991878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.991886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:39.997617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:39.997637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:39.997644] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:40.003399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:40.003420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:40.003428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:40.009195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:40.009215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:40.009223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:40.014985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:40.015005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:40.015013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:40.021444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:40.021468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:40.021477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:40.027961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:40.027983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.360 [2024-07-12 17:34:40.027991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.360 [2024-07-12 17:34:40.033446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.360 [2024-07-12 17:34:40.033467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.361 [2024-07-12 17:34:40.033475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.361 [2024-07-12 17:34:40.038710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.361 [2024-07-12 17:34:40.038731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.361 [2024-07-12 17:34:40.038739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.361 [2024-07-12 17:34:40.044042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.361 [2024-07-12 17:34:40.044063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.361 [2024-07-12 17:34:40.044071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.361 [2024-07-12 17:34:40.049348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.361 [2024-07-12 17:34:40.049369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.361 [2024-07-12 17:34:40.049382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.361 [2024-07-12 17:34:40.054687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.361 [2024-07-12 17:34:40.054708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.361 [2024-07-12 17:34:40.054716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.361 [2024-07-12 17:34:40.060111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.361 [2024-07-12 17:34:40.060131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.361 [2024-07-12 17:34:40.060139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.361 [2024-07-12 17:34:40.065459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1edc0b0) 00:26:21.361 [2024-07-12 17:34:40.065479] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.361 [2024-07-12 17:34:40.065486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.361 00:26:21.361 Latency(us) 00:26:21.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.361 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:21.361 nvme0n1 : 2.00 4928.99 616.12 0.00 0.00 3242.63 616.18 9346.00 00:26:21.361 =================================================================================================================== 00:26:21.361 Total : 4928.99 616.12 0.00 0.00 3242.63 616.18 9346.00 00:26:21.361 0 00:26:21.361 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:21.361 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:21.361 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:21.361 | .driver_specific 00:26:21.361 | .nvme_error 00:26:21.361 | .status_code 00:26:21.361 | .command_transient_transport_error' 00:26:21.361 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:21.620 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 318 > 0 )) 00:26:21.620 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 19274 00:26:21.620 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 19274 ']' 00:26:21.620 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 19274 00:26:21.620 17:34:40 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:21.620 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:21.620 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 19274 00:26:21.620 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:21.620 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:21.620 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 19274' 00:26:21.620 killing process with pid 19274 00:26:21.620 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 19274 00:26:21.620 Received shutdown signal, test time was about 2.000000 seconds 00:26:21.620 00:26:21.620 Latency(us) 00:26:21.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.620 =================================================================================================================== 00:26:21.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.620 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 19274 00:26:21.879 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:21.879 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:21.879 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:21.879 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:21.879 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:21.879 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=19978 00:26:21.879 17:34:40 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 19978 /var/tmp/bperf.sock 00:26:21.879 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:21.879 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 19978 ']' 00:26:21.879 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:21.879 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:21.879 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:21.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:21.879 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:21.879 17:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.879 [2024-07-12 17:34:40.546868] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:26:21.879 [2024-07-12 17:34:40.546913] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid19978 ]
00:26:21.879 EAL: No free 2048 kB hugepages reported on node 1
00:26:21.879 [2024-07-12 17:34:40.599709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:22.137 [2024-07-12 17:34:40.678777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:22.717 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:22.717 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:22.717 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:22.717 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:22.974 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:22.975 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:22.975 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:22.975 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:22.975 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:22.975 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:23.232 nvme0n1
00:26:23.232 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:23.232 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:23.232 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:23.232 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:23.232 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:23.232 17:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:23.491 Running I/O for 2 seconds...
00:26:23.491 [2024-07-12 17:34:42.035765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.491 [2024-07-12 17:34:42.035949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.491 [2024-07-12 17:34:42.035977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.491 [2024-07-12 17:34:42.045349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.491 [2024-07-12 17:34:42.045533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.491 [2024-07-12 17:34:42.045555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.491 [2024-07-12 17:34:42.055030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.491 [2024-07-12 17:34:42.055223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.491 [2024-07-12 17:34:42.055249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.491 [2024-07-12 17:34:42.064695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.491 [2024-07-12 17:34:42.064868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.491 [2024-07-12 17:34:42.064886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.491 [2024-07-12 17:34:42.074331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.491 [2024-07-12 17:34:42.074529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.491 [2024-07-12 17:34:42.074547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.491 [2024-07-12 17:34:42.083993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.491 [2024-07-12 17:34:42.084168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.491 [2024-07-12 17:34:42.084185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.491 [2024-07-12 17:34:42.093668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.491 [2024-07-12 17:34:42.093858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.491 [2024-07-12 17:34:42.093877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.491 [2024-07-12 17:34:42.103309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.491 [2024-07-12 17:34:42.103507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.491 [2024-07-12 17:34:42.103526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.491 [2024-07-12 17:34:42.112914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.491 [2024-07-12 17:34:42.113084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.491 [2024-07-12 17:34:42.113100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.491 [2024-07-12 17:34:42.122472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.491 [2024-07-12 17:34:42.122661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.122678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.132104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.132291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.132309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.141705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.141872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.141889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.151301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.151501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.151518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.160869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.161039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.161059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.170328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.170537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.170556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.179919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.180089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.180106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.189505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.189676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.189692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.199068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.199260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.199286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.208635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.208806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.208823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.218169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.218337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.218353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.227737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.227907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.227924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.237274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.237450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.237468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.246849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.247041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.247061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.256416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.256588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.256605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.492 [2024-07-12 17:34:42.265956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.492 [2024-07-12 17:34:42.266144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.492 [2024-07-12 17:34:42.266160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.751 [2024-07-12 17:34:42.275722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.751 [2024-07-12 17:34:42.275909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.751 [2024-07-12 17:34:42.275926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.751 [2024-07-12 17:34:42.285429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.751 [2024-07-12 17:34:42.285598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.751 [2024-07-12 17:34:42.285615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.751 [2024-07-12 17:34:42.295004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.751 [2024-07-12 17:34:42.295194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.751 [2024-07-12 17:34:42.295211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.751 [2024-07-12 17:34:42.304799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.751 [2024-07-12 17:34:42.304986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.751 [2024-07-12 17:34:42.305004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.751 [2024-07-12 17:34:42.314537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.751 [2024-07-12 17:34:42.314709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.751 [2024-07-12 17:34:42.314725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.751 [2024-07-12 17:34:42.324186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.751 [2024-07-12 17:34:42.324357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.751 [2024-07-12 17:34:42.324374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.751 [2024-07-12 17:34:42.333699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.751 [2024-07-12 17:34:42.333875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.751 [2024-07-12 17:34:42.333892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.751 [2024-07-12 17:34:42.343496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.751 [2024-07-12 17:34:42.343696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.751 [2024-07-12 17:34:42.343714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.751 [2024-07-12 17:34:42.353216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.751 [2024-07-12 17:34:42.353407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.751 [2024-07-12 17:34:42.353424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.751 [2024-07-12 17:34:42.362827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.751 [2024-07-12 17:34:42.363015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.751 [2024-07-12 17:34:42.363033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.751 [2024-07-12 17:34:42.372412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.372581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.372598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.381927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.382096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.382112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.391520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.391718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.391735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.401064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.401254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.401271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.410667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.410855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.410873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.420214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.420399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.420416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.429774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.429944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.429960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.439326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.439522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.439539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.448935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.449105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.449122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.458451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.458640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.458657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.468071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.468239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.468256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.477611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.477781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.477798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.487218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.487407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.487425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.496819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.497011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.497031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.506403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.506594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.506611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.515984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.516155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.516172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.752 [2024-07-12 17:34:42.525514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:23.752 [2024-07-12 17:34:42.525705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:23.752 [2024-07-12 17:34:42.525730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.535286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.535484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.535501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.544850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.545023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.545039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.554391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.554580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.554599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.564152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.564342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.564359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.573861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.574049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.574066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.583549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.583725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.583748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.593119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.593291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.593307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.602697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.602885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.602902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.612271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.612452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.612469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.621804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.621972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.621988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.631400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.631588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.631605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.641116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.641285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.641302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.650734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.650923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.650940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.660310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.660484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.660501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.669830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.670002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.670019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.679415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.679607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.679640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.688993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.689164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.689181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.698478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.698664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 17:34:42.698682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.011 [2024-07-12 17:34:42.708092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.011 [2024-07-12 17:34:42.708260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.011 [2024-07-12 
17:34:42.708276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.011 [2024-07-12 17:34:42.717598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.011 [2024-07-12 17:34:42.717766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.011 [2024-07-12 17:34:42.717783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.011 [2024-07-12 17:34:42.727165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.011 [2024-07-12 17:34:42.727353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.011 [2024-07-12 17:34:42.727370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.011 [2024-07-12 17:34:42.736738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.011 [2024-07-12 17:34:42.736908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.011 [2024-07-12 17:34:42.736924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.011 [2024-07-12 17:34:42.746245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.011 [2024-07-12 17:34:42.746433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14947 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:24.011 [2024-07-12 17:34:42.746449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.011 [2024-07-12 17:34:42.755852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.011 [2024-07-12 17:34:42.756021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.011 [2024-07-12 17:34:42.756038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.011 [2024-07-12 17:34:42.765385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.011 [2024-07-12 17:34:42.765554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.011 [2024-07-12 17:34:42.765571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.011 [2024-07-12 17:34:42.774950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.011 [2024-07-12 17:34:42.775135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.011 [2024-07-12 17:34:42.775152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.011 [2024-07-12 17:34:42.784549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.011 [2024-07-12 17:34:42.784738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20685 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.011 [2024-07-12 17:34:42.784755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.794299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.794494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.794513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.803897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.804066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.804083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.813415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.813586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.813604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.823189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.823381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:17561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.823400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.832892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.833065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.833085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.842472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.842641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.842658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.851966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.852138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.852155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.861546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.861743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.861760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.871056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.871228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.871245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.880743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.880934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.880952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.890326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.890500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.890518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.899852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 
17:34:42.900021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.900038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.909395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.909585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.909602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.919006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.919180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.919196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.928581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.928767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.928785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.938163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with 
pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.938335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.938353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.947678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.947848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.947865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.957245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.957433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.957450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.966842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.967012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.967029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.976345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.976524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.976541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.985882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.270 [2024-07-12 17:34:42.986054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.270 [2024-07-12 17:34:42.986071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.270 [2024-07-12 17:34:42.995546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.271 [2024-07-12 17:34:42.995719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.271 [2024-07-12 17:34:42.995737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.271 [2024-07-12 17:34:43.005391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.271 [2024-07-12 17:34:43.005589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.271 [2024-07-12 17:34:43.005607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.271 [2024-07-12 17:34:43.015030] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.271 [2024-07-12 17:34:43.015203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.271 [2024-07-12 17:34:43.015220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.271 [2024-07-12 17:34:43.024631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.271 [2024-07-12 17:34:43.024820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.271 [2024-07-12 17:34:43.024838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.271 [2024-07-12 17:34:43.034238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.271 [2024-07-12 17:34:43.034417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.271 [2024-07-12 17:34:43.034434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.271 [2024-07-12 17:34:43.043814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.271 [2024-07-12 17:34:43.043984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.271 [2024-07-12 17:34:43.044001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 
[2024-07-12 17:34:43.053807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.053982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.054002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.063409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.063579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.063596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.073182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.073373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.073395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.082902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.083089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.083106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.092672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.092844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.092862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.102245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.102433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.102450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.111844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.112014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.112031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.121375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.121554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.121572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.130943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.131112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.131129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.140596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.140767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.140785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.150173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.150363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.150385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.159743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.159914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.159931] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.169271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.169462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.169483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.178856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.179027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.179044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.188352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.188528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.188544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.530 [2024-07-12 17:34:43.197914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:24.530 [2024-07-12 17:34:43.198102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.530 [2024-07-12 17:34:43.198120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.530 [2024-07-12 17:34:43.207478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.530 [2024-07-12 17:34:43.207645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.530 [2024-07-12 17:34:43.207663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.530 [2024-07-12 17:34:43.217020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.530 [2024-07-12 17:34:43.217189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.530 [2024-07-12 17:34:43.217206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.530 [2024-07-12 17:34:43.226612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.530 [2024-07-12 17:34:43.226781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.530 [2024-07-12 17:34:43.226797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.530 [2024-07-12 17:34:43.236146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.530 [2024-07-12 17:34:43.236314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.530 [2024-07-12 17:34:43.236331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.530 [2024-07-12 17:34:43.245700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.530 [2024-07-12 17:34:43.245890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.530 [2024-07-12 17:34:43.245906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.530 [2024-07-12 17:34:43.255297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.530 [2024-07-12 17:34:43.255500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.530 [2024-07-12 17:34:43.255517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.530 [2024-07-12 17:34:43.264886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.530 [2024-07-12 17:34:43.265056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.530 [2024-07-12 17:34:43.265073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.530 [2024-07-12 17:34:43.274458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.530 [2024-07-12 17:34:43.274649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.530 [2024-07-12 17:34:43.274666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.530 [2024-07-12 17:34:43.284030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.530 [2024-07-12 17:34:43.284219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.530 [2024-07-12 17:34:43.284236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.530 [2024-07-12 17:34:43.293669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.530 [2024-07-12 17:34:43.293836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.530 [2024-07-12 17:34:43.293852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.530 [2024-07-12 17:34:43.303185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.530 [2024-07-12 17:34:43.303374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.530 [2024-07-12 17:34:43.303400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.313020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.313208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.313226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.322638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.322827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.322844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.332445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.332635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.332652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.342452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.342627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.342645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.352191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.352364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.352389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.361732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.361900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.361918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.371291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.371467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.371485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.380875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.381046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.381063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.390460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.390651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.390668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.400072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.400244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.400261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.409590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.409760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.409777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.419164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.419355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.419373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.428654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.428825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.428842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.438186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.438358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.438375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.447817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.448016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.448040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.457368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.457544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.457561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.466906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.467096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.467113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.476467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.476639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.476655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.485993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.486165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.486182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.495554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.495734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.495751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.505077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.790 [2024-07-12 17:34:43.505251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.790 [2024-07-12 17:34:43.505271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.790 [2024-07-12 17:34:43.514742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.791 [2024-07-12 17:34:43.514919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.791 [2024-07-12 17:34:43.514936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.791 [2024-07-12 17:34:43.524328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.791 [2024-07-12 17:34:43.524504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.791 [2024-07-12 17:34:43.524522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.791 [2024-07-12 17:34:43.533889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.791 [2024-07-12 17:34:43.534076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.791 [2024-07-12 17:34:43.534094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.791 [2024-07-12 17:34:43.543449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.791 [2024-07-12 17:34:43.543621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.791 [2024-07-12 17:34:43.543638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.791 [2024-07-12 17:34:43.553083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.791 [2024-07-12 17:34:43.553253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.791 [2024-07-12 17:34:43.553269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.791 [2024-07-12 17:34:43.562859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:24.791 [2024-07-12 17:34:43.563049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.791 [2024-07-12 17:34:43.563067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.572595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.572785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.572803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.582440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.582614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.582632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.592161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.592354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.592371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.601875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.602046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.602063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.611464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.611651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.611668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.620991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.621161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.621178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.630564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.630752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.630769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.640139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.640309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.640326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.649663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.649835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.649853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.659226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.659397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.659414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.668748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.668918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.668935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.678223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.678394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.678411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.687806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.687994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.688011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.697308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.697505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.697523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.706881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.707050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.707066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.716420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.716608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.716636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.725964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.726134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.726151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.050 [2024-07-12 17:34:43.735498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.050 [2024-07-12 17:34:43.735687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.050 [2024-07-12 17:34:43.735704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.051 [2024-07-12 17:34:43.745070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.051 [2024-07-12 17:34:43.745241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.051 [2024-07-12 17:34:43.745258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.051 [2024-07-12 17:34:43.754627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.051 [2024-07-12 17:34:43.754798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.051 [2024-07-12 17:34:43.754816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.051 [2024-07-12 17:34:43.764196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.051 [2024-07-12 17:34:43.764385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.051 [2024-07-12 17:34:43.764402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.051 [2024-07-12 17:34:43.773729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.051 [2024-07-12 17:34:43.773898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.051 [2024-07-12 17:34:43.773915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.051 [2024-07-12 17:34:43.783214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.051 [2024-07-12 17:34:43.783390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.051 [2024-07-12 17:34:43.783423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.051 [2024-07-12 17:34:43.792796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.051 [2024-07-12 17:34:43.792964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.051 [2024-07-12 17:34:43.792981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.051 [2024-07-12 17:34:43.802296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.051 [2024-07-12 17:34:43.802471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.051 [2024-07-12 17:34:43.802489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.051 [2024-07-12 17:34:43.811889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.051 [2024-07-12 17:34:43.812061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.051 [2024-07-12 17:34:43.812077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.051 [2024-07-12 17:34:43.821392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.051 [2024-07-12 17:34:43.821565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.051 [2024-07-12 17:34:43.821582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.310 [2024-07-12 17:34:43.831128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.310 [2024-07-12 17:34:43.831302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.310 [2024-07-12 17:34:43.831319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.310 [2024-07-12 17:34:43.840953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.310 [2024-07-12 17:34:43.841145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.310 [2024-07-12 17:34:43.841174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.310 [2024-07-12 17:34:43.850653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.310 [2024-07-12 17:34:43.850828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.310 [2024-07-12 17:34:43.850845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.310 [2024-07-12 17:34:43.860363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.310 [2024-07-12 17:34:43.860559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.310 [2024-07-12 17:34:43.860577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.310 [2024-07-12 17:34:43.869935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.310 [2024-07-12 17:34:43.870104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.310 [2024-07-12 17:34:43.870121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.310 [2024-07-12 17:34:43.879519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.310 [2024-07-12 17:34:43.879689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.310 [2024-07-12 17:34:43.879705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.310 [2024-07-12 17:34:43.889183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.310 [2024-07-12 17:34:43.889371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.310 [2024-07-12 17:34:43.889394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.310 [2024-07-12 17:34:43.898746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.310 [2024-07-12 17:34:43.898915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.310 [2024-07-12 17:34:43.898931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.310 [2024-07-12 17:34:43.908266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.310 [2024-07-12 17:34:43.908453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.310 [2024-07-12 17:34:43.908470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.310 [2024-07-12 17:34:43.917859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.310 [2024-07-12 17:34:43.918027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.310 [2024-07-12 17:34:43.918043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.310 [2024-07-12 17:34:43.927436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.310 [2024-07-12 17:34:43.927609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.310 [2024-07-12 17:34:43.927626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.310 [2024-07-12 17:34:43.937039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.310 [2024-07-12 17:34:43.937212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.311 [2024-07-12 17:34:43.937228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.311 [2024-07-12 17:34:43.946591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.311 [2024-07-12 17:34:43.946762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.311 [2024-07-12 17:34:43.946778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.311 [2024-07-12 17:34:43.956100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.311 [2024-07-12 17:34:43.956270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.311 [2024-07-12 17:34:43.956287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.311 [2024-07-12 17:34:43.965767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.311 [2024-07-12 17:34:43.965935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.311 [2024-07-12 17:34:43.965951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.311 [2024-07-12 17:34:43.975276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.311 [2024-07-12 17:34:43.975449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.311 [2024-07-12 17:34:43.975466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.311 [2024-07-12 17:34:43.984833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640
00:26:25.311 [2024-07-12 17:34:43.985021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:25.311 [2024-07-12 17:34:43.985038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:25.311 [2024-07-12 17:34:43.994410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with
pdu=0x2000190fd640 00:26:25.311 [2024-07-12 17:34:43.994579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.311 [2024-07-12 17:34:43.994596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.311 [2024-07-12 17:34:44.003967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:25.311 [2024-07-12 17:34:44.004136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.311 [2024-07-12 17:34:44.004153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.311 [2024-07-12 17:34:44.013588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:25.311 [2024-07-12 17:34:44.013758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.311 [2024-07-12 17:34:44.013775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.311 [2024-07-12 17:34:44.023134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe654d0) with pdu=0x2000190fd640 00:26:25.311 [2024-07-12 17:34:44.023305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.311 [2024-07-12 17:34:44.023322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.311 00:26:25.311 Latency(us) 00:26:25.311 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:26:25.311 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:25.311 nvme0n1 : 2.00 26563.54 103.76 0.00 0.00 4810.41 4445.05 10086.85 00:26:25.311 =================================================================================================================== 00:26:25.311 Total : 26563.54 103.76 0.00 0.00 4810.41 4445.05 10086.85 00:26:25.311 0 00:26:25.311 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:25.311 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:25.311 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:25.311 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:25.311 | .driver_specific 00:26:25.311 | .nvme_error 00:26:25.311 | .status_code 00:26:25.311 | .command_transient_transport_error' 00:26:25.570 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 )) 00:26:25.570 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 19978 00:26:25.570 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 19978 ']' 00:26:25.570 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 19978 00:26:25.570 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:25.570 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:25.570 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 19978 00:26:25.570 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:26:25.570 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:25.570 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 19978' 00:26:25.570 killing process with pid 19978 00:26:25.570 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 19978 00:26:25.570 Received shutdown signal, test time was about 2.000000 seconds 00:26:25.570 00:26:25.570 Latency(us) 00:26:25.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.570 =================================================================================================================== 00:26:25.570 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:25.570 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 19978 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=20491 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 20491 /var/tmp/bperf.sock 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # 
'[' -z 20491 ']' 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:25.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:25.829 17:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.829 [2024-07-12 17:34:44.501840] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:26:25.829 [2024-07-12 17:34:44.501886] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid20491 ] 00:26:25.829 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:25.829 Zero copy mechanism will not be used. 
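Earlier in this run, `get_transient_errcount` pulls the transient transport error count out of `bdev_get_iostat` output with a jq filter (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`) and checks it is nonzero (`(( 208 > 0 ))`). A minimal Python sketch of that same extraction, against a hypothetical iostat JSON snippet (the field names mirror the jq path in the log; the `208` value mirrors the count checked above), might look like:

```python
import json

# Hypothetical bdev_get_iostat response fragment; the nesting follows the
# jq filter used by the test harness:
#   .bdevs[0] | .driver_specific | .nvme_error | .status_code
#            | .command_transient_transport_error
iostat_json = """
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 208
          }
        }
      }
    }
  ]
}
"""

def transient_errcount(raw: str) -> int:
    """Return the transient transport error count for the first bdev."""
    stats = json.loads(raw)
    return stats["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

print(transient_errcount(iostat_json))  # prints 208 for this sample
```

In the actual test the same value is fetched over the bdevperf RPC socket (`rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`) and piped through jq; this sketch only illustrates the JSON path being read.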
00:26:25.829 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.829 [2024-07-12 17:34:44.556436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.087 [2024-07-12 17:34:44.635852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.655 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:26.655 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:26.655 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:26.655 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:26.914 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:26.914 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.914 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:26.914 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.914 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.914 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:27.172 nvme0n1 00:26:27.172 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:26:27.172 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.172 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:27.172 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.172 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:27.172 17:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:27.172 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:27.172 Zero copy mechanism will not be used. 00:26:27.172 Running I/O for 2 seconds... 00:26:27.172 [2024-07-12 17:34:45.849426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.172 [2024-07-12 17:34:45.849810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.849838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.856784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.857154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.857177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.864783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with 
pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.865146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.865167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.872393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.872732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.872752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.878302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.878650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.878670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.884870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.885203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.885223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.892360] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.892723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.892743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.898262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.898583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.898608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.904181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.904536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.904556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.909785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.910123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.910142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 
17:34:45.914819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.915153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.915172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.919683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.920014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.920034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.924625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.924961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.924981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.929639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.929974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.929993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.934469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.934807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.934826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.939415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.939748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.939767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.944647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.944987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.945006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.173 [2024-07-12 17:34:45.949777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.173 [2024-07-12 17:34:45.950119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.173 [2024-07-12 17:34:45.950139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:45.955348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:45.955714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:45.955733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:45.960442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:45.960790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:45.960808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:45.966017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:45.966362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:45.966386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:45.973177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:45.973516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:45.973536] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:45.979819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:45.980160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:45.980179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:45.987342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:45.987691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:45.987710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:45.994868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:45.994983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:45.995000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.002797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.003142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.003162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.011199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.011555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.011574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.018336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.018703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.018722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.024138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.024480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.024499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.029518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.029860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.029878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.034793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.035137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.035156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.039864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.040195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.040214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.044554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.044887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.044906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.049212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.049561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.049584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.053887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.054211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.054231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.059064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.059399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.059418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.064285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.064616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.064635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.068970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 
00:26:27.434 [2024-07-12 17:34:46.069305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.069323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.073632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.073971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.073991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.078211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.078554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.078573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.082834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.083167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.083186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.087414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.087761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.087781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.092629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.092977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.092996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.097902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.098245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.098264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.102764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.103103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.103123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.107480] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.107823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.107843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.112087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.112429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.112448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.116743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.117077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.117097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.121341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.121687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.121706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.126106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.126451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.126470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.130651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.130982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.131001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.135159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.135506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.135525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.139659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.139998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.140017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.144196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.144526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.144546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.434 [2024-07-12 17:34:46.148752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.434 [2024-07-12 17:34:46.149088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.434 [2024-07-12 17:34:46.149108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.435 [2024-07-12 17:34:46.153285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.435 [2024-07-12 17:34:46.153621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.435 [2024-07-12 17:34:46.153641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.435 [2024-07-12 17:34:46.157866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.435 [2024-07-12 17:34:46.158190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.435 [2024-07-12 17:34:46.158210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.435 [2024-07-12 17:34:46.162417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.435 [2024-07-12 17:34:46.162760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.435 [2024-07-12 17:34:46.162780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.435 [2024-07-12 17:34:46.167037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.435 [2024-07-12 17:34:46.167375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.435 [2024-07-12 17:34:46.167401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.435 [2024-07-12 17:34:46.171571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.435 [2024-07-12 17:34:46.171895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.435 [2024-07-12 17:34:46.171917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.435 [2024-07-12 17:34:46.176073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.435 [2024-07-12 17:34:46.176411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:27.435 [2024-07-12 17:34:46.176430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.435 [2024-07-12 17:34:46.180642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.435 [2024-07-12 17:34:46.180973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.435 [2024-07-12 17:34:46.180992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.435 [2024-07-12 17:34:46.185560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.435 [2024-07-12 17:34:46.185891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.435 [2024-07-12 17:34:46.185910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.435 [2024-07-12 17:34:46.190050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.435 [2024-07-12 17:34:46.190394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.435 [2024-07-12 17:34:46.190413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.435 [2024-07-12 17:34:46.194590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.435 [2024-07-12 17:34:46.194919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.435 [2024-07-12 17:34:46.194938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.435 [2024-07-12 17:34:46.199024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.435 [2024-07-12 17:34:46.199345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.435 [2024-07-12 17:34:46.199364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.435 [2024-07-12 17:34:46.203458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.435 [2024-07-12 17:34:46.203791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.435 [2024-07-12 17:34:46.203810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.435 [2024-07-12 17:34:46.207991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.435 [2024-07-12 17:34:46.208325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.435 [2024-07-12 17:34:46.208344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.212564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.212899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.212918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.217076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.217422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.217441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.221575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.221912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.221931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.226065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.226406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.226425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.230601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 
00:26:27.695 [2024-07-12 17:34:46.230939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.230958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.235178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.235516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.235536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.239717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.240049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.240067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.244293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.244637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.244656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.248833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.249168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.249187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.253484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.253826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.253845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.258163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.258483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.258502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.262815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.263148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.263167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.267328] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.267645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.267665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.271831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.272164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.272182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.276397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.276734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.276754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.280951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.281277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.281296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.285988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.286320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.286339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.291484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.291816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.291839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.297618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.297946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.297965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.303460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.303793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.303811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.309397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.309542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.309560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.316510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.316867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.316886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.323441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.323783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.323801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.330242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.330598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.330617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.337508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.337871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.337890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.343948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.344020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.344038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.350828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.351155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.695 [2024-07-12 17:34:46.351174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.695 [2024-07-12 17:34:46.358237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:27.695 [2024-07-12 17:34:46.358614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:27.695 [2024-07-12 17:34:46.358633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.364829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.365168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.365187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.371492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.371830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.371850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.378108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.378460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.378479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.384464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.384820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.384839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.390773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.391102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.391121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.397703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.398040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.398059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.403316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.403652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.403675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.408978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.409316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.409335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.414511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.414856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.414876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.420516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.420861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.420881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.426463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.426790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.426808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.432081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.432409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.432427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.437874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.438218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.438236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.443942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.444294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.444313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.449620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.449972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.449992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.455041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.455369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.455393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.461357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.461712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.461733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.466748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.467078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.467097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.696 [2024-07-12 17:34:46.471937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.696 [2024-07-12 17:34:46.472280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.696 [2024-07-12 17:34:46.472300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.956 [2024-07-12 17:34:46.476807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.956 [2024-07-12 17:34:46.477147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.956 [2024-07-12 17:34:46.477166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.956 [2024-07-12 17:34:46.481708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.956 [2024-07-12 17:34:46.482062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.956 [2024-07-12 17:34:46.482081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.956 [2024-07-12 17:34:46.486481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.956 [2024-07-12 17:34:46.486828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.956 [2024-07-12 17:34:46.486847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.956 [2024-07-12 17:34:46.491103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.956 [2024-07-12 17:34:46.491437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.956 [2024-07-12 17:34:46.491456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.956 [2024-07-12 17:34:46.495864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.956 [2024-07-12 17:34:46.496193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.956 [2024-07-12 17:34:46.496213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.956 [2024-07-12 17:34:46.501148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.956 [2024-07-12 17:34:46.501481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.956 [2024-07-12 17:34:46.501499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.956 [2024-07-12 17:34:46.506228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.956 [2024-07-12 17:34:46.506582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.956 [2024-07-12 17:34:46.506601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.956 [2024-07-12 17:34:46.511082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.956 [2024-07-12 17:34:46.511429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.956 [2024-07-12 17:34:46.511448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.956 [2024-07-12 17:34:46.515851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.956 [2024-07-12 17:34:46.516180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.956 [2024-07-12 17:34:46.516198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.956 [2024-07-12 17:34:46.520592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.956 [2024-07-12 17:34:46.520924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.956 [2024-07-12 17:34:46.520943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.525303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.525632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.525651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.529955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.530289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.530307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.534652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.534987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.535006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.539339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.539690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.539712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.543952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.544303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.544322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.548648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.548979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.548998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.553269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.553602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.553622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.557944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.558276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.558294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.562661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.563004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.563025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.567374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.567731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.567750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.572130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.572462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.572481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.576742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.577077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.577096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.581419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.581755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.581774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.586068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.586421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.586440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.590701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.591038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.591058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.595373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.595708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.595727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.600002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.600344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.600362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.604671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.605003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.605022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.609295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.609642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.609662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.614057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.614393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.614412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.618802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.619131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.619151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.623467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.623809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.623828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.628180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.628520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.628539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.632909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.633237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.633255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.638740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.639099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.639118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.645429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.645770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.645790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.651839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.652181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.652199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.659370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.957 [2024-07-12 17:34:46.659715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.957 [2024-07-12 17:34:46.659733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.957 [2024-07-12 17:34:46.667117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.958 [2024-07-12 17:34:46.667456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.958 [2024-07-12 17:34:46.667475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.958 [2024-07-12 17:34:46.675541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.958 [2024-07-12 17:34:46.675894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.958 [2024-07-12 17:34:46.675916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.958 [2024-07-12 17:34:46.683287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.958 [2024-07-12 17:34:46.683646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.958 [2024-07-12 17:34:46.683665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.958 [2024-07-12 17:34:46.691436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.958 [2024-07-12 17:34:46.691794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.958 [2024-07-12 17:34:46.691813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:27.958 [2024-07-12 17:34:46.700662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.958 [2024-07-12 17:34:46.701002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.958 [2024-07-12 17:34:46.701021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:27.958 [2024-07-12 17:34:46.709425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.958 [2024-07-12 17:34:46.709780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.958 [2024-07-12 17:34:46.709800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:27.958 [2024-07-12 17:34:46.717346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.958 [2024-07-12 17:34:46.717716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.958 [2024-07-12 17:34:46.717736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:27.958 [2024-07-12 17:34:46.726266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:27.958 [2024-07-12 17:34:46.726606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.958 [2024-07-12 17:34:46.726625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.734747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.735105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.735125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.743823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.744157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.744177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.752417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.752768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.752788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.759893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.760086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.760102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.768784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.769141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.769161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.777415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.777761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.777780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.785831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.786004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.786021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.795006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.795390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.795410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.803446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.803796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.803815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.811813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.812138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.812157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.820737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.821075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.821094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.829355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.829712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.829731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.838643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.838999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.839018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.846985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.847338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.847357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.853585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.853941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.853959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.859612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.859948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.218 [2024-07-12 17:34:46.859967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:28.218 [2024-07-12 17:34:46.864798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90
00:26:28.218 [2024-07-12 17:34:46.865142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.218 [2024-07-12 17:34:46.865161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.218 [2024-07-12 17:34:46.870136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.218 [2024-07-12 17:34:46.870481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.218 [2024-07-12 17:34:46.870500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.218 [2024-07-12 17:34:46.876666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.218 [2024-07-12 17:34:46.877001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.218 [2024-07-12 17:34:46.877021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.218 [2024-07-12 17:34:46.883699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.218 [2024-07-12 17:34:46.884044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.218 [2024-07-12 17:34:46.884068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.218 [2024-07-12 17:34:46.889791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.218 [2024-07-12 17:34:46.889885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.889902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.895641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.895979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.895998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.900687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.901029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.901048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.905742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.906077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.906096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.911516] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.911851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.911870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.916658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.916988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.917008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.921597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.921930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.921949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.926605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.926938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.926956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.931576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.931642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.931659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.936987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.937324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.937343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.943570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.943909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.943928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.949274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.949609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.949628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.955222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.955567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.955586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.961928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.962276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.962294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.968131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.968487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.968506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.973824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.974173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.974191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.979391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.979712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.979731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.985020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.985392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.985411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.991118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.219 [2024-07-12 17:34:46.991457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.219 [2024-07-12 17:34:46.991476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.219 [2024-07-12 17:34:46.996242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.479 [2024-07-12 17:34:46.996586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.479 [2024-07-12 17:34:46.996605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.479 [2024-07-12 17:34:47.001089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.479 [2024-07-12 17:34:47.001436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.479 [2024-07-12 17:34:47.001455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.479 [2024-07-12 17:34:47.005860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.479 [2024-07-12 17:34:47.006186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.479 [2024-07-12 17:34:47.006204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.479 [2024-07-12 17:34:47.010598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.479 [2024-07-12 17:34:47.010933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.479 [2024-07-12 17:34:47.010951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.479 [2024-07-12 17:34:47.015249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.479 [2024-07-12 17:34:47.015585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.479 [2024-07-12 17:34:47.015604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.479 [2024-07-12 17:34:47.020030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.479 [2024-07-12 17:34:47.020363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.479 [2024-07-12 17:34:47.020388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.479 [2024-07-12 17:34:47.024761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.479 [2024-07-12 17:34:47.025087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.025110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.029522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.029867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.029885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.034199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.034530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.034548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.038893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.039223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.039241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.044092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.044446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.044465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.049246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.049585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.049604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.053976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 
00:26:28.480 [2024-07-12 17:34:47.054308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.054327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.058667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.059000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.059019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.063285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.063625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.063644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.068044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.068385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.068404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.072655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.072980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.072999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.077816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.078161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.078180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.082894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.083228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.083247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.087575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.087895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.087914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.092229] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.092561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.092580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.096927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.097271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.097289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.101635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.101961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.101980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.106508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.106853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.106871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.111258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.111608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.111628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.116134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.116465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.116484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.121206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.121543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.121562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.126081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.126420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.126439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.131107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.131432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.131451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.135759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.136088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.136106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.140859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.141186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.141204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.147171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.147513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.147532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.154113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.154201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.154220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.161276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.161617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.161635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.167978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.168335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.480 [2024-07-12 17:34:47.168354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.480 [2024-07-12 17:34:47.174833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.480 [2024-07-12 17:34:47.175165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.481 [2024-07-12 17:34:47.175183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.481 [2024-07-12 17:34:47.181205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.481 [2024-07-12 17:34:47.181547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.481 [2024-07-12 17:34:47.181566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.481 [2024-07-12 17:34:47.187206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.481 [2024-07-12 17:34:47.187561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.481 [2024-07-12 17:34:47.187581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.481 [2024-07-12 17:34:47.194036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.481 [2024-07-12 17:34:47.194393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.481 [2024-07-12 17:34:47.194412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.481 [2024-07-12 17:34:47.200707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.481 [2024-07-12 17:34:47.201046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.481 [2024-07-12 17:34:47.201065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.481 [2024-07-12 17:34:47.207326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.481 [2024-07-12 17:34:47.207687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.481 [2024-07-12 17:34:47.207706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.481 [2024-07-12 17:34:47.214218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.481 [2024-07-12 17:34:47.214565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.481 [2024-07-12 17:34:47.214585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.481 [2024-07-12 17:34:47.220812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.481 [2024-07-12 17:34:47.221153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.481 [2024-07-12 17:34:47.221172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.481 [2024-07-12 17:34:47.227449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.481 [2024-07-12 17:34:47.227794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.481 [2024-07-12 17:34:47.227813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.481 [2024-07-12 17:34:47.233682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.481 [2024-07-12 17:34:47.234021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.481 [2024-07-12 17:34:47.234039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.481 [2024-07-12 17:34:47.239569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.481 [2024-07-12 17:34:47.239903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.481 [2024-07-12 17:34:47.239922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.481 [2024-07-12 17:34:47.245310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.481 [2024-07-12 17:34:47.245653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.481 [2024-07-12 17:34:47.245672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.481 [2024-07-12 17:34:47.251107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 
00:26:28.481 [2024-07-12 17:34:47.251464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.481 [2024-07-12 17:34:47.251499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.481 [2024-07-12 17:34:47.256948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.481 [2024-07-12 17:34:47.257286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.481 [2024-07-12 17:34:47.257305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.262832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.263171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.263196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.268572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.268919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.268938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.274152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.274501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.274520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.279277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.279613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.279633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.284856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.285198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.285218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.291219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.291564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.291583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.297281] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.297630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.297650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.302879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.303233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.303252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.308946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.309297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.309316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.314391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.314732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.314754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.319619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.319948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.319967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.325165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.325512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.325531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.330550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.330887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.330907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.335779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.336115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.336133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.341391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.341720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.341739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.742 [2024-07-12 17:34:47.348160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.742 [2024-07-12 17:34:47.348498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.742 [2024-07-12 17:34:47.348517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.354893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.355234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.355253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.360977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.361315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.361333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.366675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.367017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.367036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.371831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.372168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.372188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.376781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.377121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.377140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.382547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.382882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.743 [2024-07-12 17:34:47.382901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.389665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.390072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.390091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.395965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.396304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.396323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.402022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.402366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.402392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.407933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.408275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.408294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.413512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.413855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.413877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.418601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.418928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.418947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.423520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.423858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.423877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.430795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.431122] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.431141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.435761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.436095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.436114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.440708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.441045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.441065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.445563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.445910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.445929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.450459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 
00:26:28.743 [2024-07-12 17:34:47.450792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.450810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.455362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.455722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.455741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.460344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.460458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.460475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.465789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.466091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.466110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.470278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.470589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.470608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.474780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.475089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.475108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.479687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.479995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.480015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.484709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.485007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.485026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.491124] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.491434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.491454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.497428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.497782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.497803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.504038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.504345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.743 [2024-07-12 17:34:47.504364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.743 [2024-07-12 17:34:47.510527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.743 [2024-07-12 17:34:47.510911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.744 [2024-07-12 17:34:47.510931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:26:28.744 [2024-07-12 17:34:47.517772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:28.744 [2024-07-12 17:34:47.518144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.744 [2024-07-12 17:34:47.518164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... further data_crc32_calc_done / nvme_io_qpair_print_command / spdk_nvme_print_completion triplets with the same tqpair (0xe65810) and error code (00/22), differing only in timestamp, lba, and sqhd, omitted ...]
00:26:29.265 [2024-07-12 17:34:47.839210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:29.265 [2024-07-12 17:34:47.839520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.265 [2024-07-12 17:34:47.839539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.265 [2024-07-12 17:34:47.844746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe65810) with pdu=0x2000190fef90 00:26:29.265 [2024-07-12 17:34:47.845032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-07-12 17:34:47.845050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.265 00:26:29.266 Latency(us) 00:26:29.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.266 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:29.266 nvme0n1 : 2.00 5307.75 663.47 0.00 0.00 3009.96 1980.33 9687.93 00:26:29.266 =================================================================================================================== 00:26:29.266 Total : 5307.75 663.47 0.00 0.00 3009.96 1980.33 9687.93 00:26:29.266 0 00:26:29.266 17:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:29.266 17:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:29.266 17:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:29.266 | .driver_specific 00:26:29.266 | .nvme_error 00:26:29.266 | .status_code 00:26:29.266 | .command_transient_transport_error' 00:26:29.266 17:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 342 > 0 )) 
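The `get_transient_errcount` step above pipes `bdev_get_iostat` output through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` and then checks `(( 342 > 0 ))`. A minimal sketch of the same extraction in Python, against a sample payload whose shape is inferred from that jq path (the surrounding field values are illustrative, not from this run; only the 342 count comes from the log):

```python
import json

# Sample bdev_get_iostat-style payload. Only the nested path mirrored from
# the jq filter in host/digest.sh is assumed; other fields are hypothetical.
payload = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 342
          }
        }
      }
    }
  ]
}
""")

# Walk the same path the jq filter walks.
errcount = payload["bdevs"][0]["driver_specific"]["nvme_error"][
    "status_code"]["command_transient_transport_error"]

# The digest-error test passes when at least one transient transport error
# was observed, i.e. the shell check (( errcount > 0 )).
print(errcount)  # -> 342
```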
00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 20491 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 20491 ']' 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 20491 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 20491 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 20491' 00:26:29.525 killing process with pid 20491 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 20491 00:26:29.525 Received shutdown signal, test time was about 2.000000 seconds 00:26:29.525 00:26:29.525 Latency(us) 00:26:29.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.525 =================================================================================================================== 00:26:29.525 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 20491 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 18486 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 18486 ']' 00:26:29.525 17:34:48 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 18486 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:29.525 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 18486 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 18486' 00:26:29.784 killing process with pid 18486 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 18486 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 18486 00:26:29.784 00:26:29.784 real 0m16.699s 00:26:29.784 user 0m32.072s 00:26:29.784 sys 0m4.396s 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.784 ************************************ 00:26:29.784 END TEST nvmf_digest_error 00:26:29.784 ************************************ 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:29.784 17:34:48 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:29.784 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:29.784 rmmod nvme_tcp 00:26:29.784 rmmod nvme_fabrics 00:26:30.043 rmmod nvme_keyring 00:26:30.043 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.043 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:30.043 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:30.044 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 18486 ']' 00:26:30.044 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 18486 00:26:30.044 17:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 18486 ']' 00:26:30.044 17:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 18486 00:26:30.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (18486) - No such process 00:26:30.044 17:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 18486 is not found' 00:26:30.044 Process with pid 18486 is not found 00:26:30.044 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:30.044 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:30.044 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:30.044 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:30.044 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:30.044 17:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.044 17:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.044 17:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.949 17:34:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:31.949 00:26:31.949 real 0m40.782s 00:26:31.949 user 1m5.640s 00:26:31.949 sys 0m12.615s 00:26:31.949 17:34:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:31.949 17:34:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:31.949 ************************************ 00:26:31.949 END TEST nvmf_digest 00:26:31.949 ************************************ 00:26:31.949 17:34:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:31.949 17:34:50 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:26:31.949 17:34:50 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:26:31.949 17:34:50 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:26:31.949 17:34:50 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:31.949 17:34:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:31.949 17:34:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:31.949 17:34:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:31.949 ************************************ 00:26:31.949 START TEST nvmf_bdevperf 00:26:31.949 ************************************ 00:26:31.949 17:34:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:32.209 * Looking for test storage... 
00:26:32.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.209 17:34:50 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:32.209 17:34:50 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:32.209 17:34:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:37.482 17:34:55 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.482 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.482 
17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:37.483 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:37.483 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:37.483 Found net devices under 0000:86:00.0: cvl_0_0 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:37.483 Found net devices under 0000:86:00.1: cvl_0_1 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:37.483 17:34:55 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:37.483 17:34:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:37.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:26:37.483 00:26:37.483 --- 10.0.0.2 ping statistics --- 00:26:37.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.483 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:37.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:26:37.483 00:26:37.483 --- 10.0.0.1 ping statistics --- 00:26:37.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.483 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:37.483 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.484 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=24610 00:26:37.484 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 24610 00:26:37.484 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 24610 ']' 00:26:37.484 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:26:37.484 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:37.484 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.484 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:37.484 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:37.484 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.484 [2024-07-12 17:34:56.092492] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:26:37.484 [2024-07-12 17:34:56.092540] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.484 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.484 [2024-07-12 17:34:56.149829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:37.484 [2024-07-12 17:34:56.229783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.484 [2024-07-12 17:34:56.229818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.484 [2024-07-12 17:34:56.229825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.484 [2024-07-12 17:34:56.229832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.484 [2024-07-12 17:34:56.229837] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:37.484 [2024-07-12 17:34:56.229935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:37.484 [2024-07-12 17:34:56.230019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:37.484 [2024-07-12 17:34:56.230021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:38.420 [2024-07-12 17:34:56.942169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:38.420 Malloc0 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.420 17:34:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:38.420 [2024-07-12 17:34:57.005123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:26:38.420 { 00:26:38.420 "params": { 00:26:38.420 "name": "Nvme$subsystem", 00:26:38.420 "trtype": "$TEST_TRANSPORT", 00:26:38.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.420 "adrfam": "ipv4", 00:26:38.420 "trsvcid": "$NVMF_PORT", 00:26:38.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.420 "hdgst": ${hdgst:-false}, 00:26:38.420 "ddgst": ${ddgst:-false} 00:26:38.420 }, 00:26:38.420 "method": "bdev_nvme_attach_controller" 00:26:38.420 } 00:26:38.420 EOF 00:26:38.420 )") 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:38.420 17:34:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:38.420 "params": { 00:26:38.420 "name": "Nvme1", 00:26:38.420 "trtype": "tcp", 00:26:38.420 "traddr": "10.0.0.2", 00:26:38.420 "adrfam": "ipv4", 00:26:38.420 "trsvcid": "4420", 00:26:38.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:38.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:38.420 "hdgst": false, 00:26:38.420 "ddgst": false 00:26:38.420 }, 00:26:38.420 "method": "bdev_nvme_attach_controller" 00:26:38.420 }' 00:26:38.420 [2024-07-12 17:34:57.056486] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:26:38.420 [2024-07-12 17:34:57.056527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid24711 ] 00:26:38.420 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.420 [2024-07-12 17:34:57.109744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.420 [2024-07-12 17:34:57.183279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.678 Running I/O for 1 seconds... 00:26:39.671 00:26:39.671 Latency(us) 00:26:39.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.671 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:39.671 Verification LBA range: start 0x0 length 0x4000 00:26:39.671 Nvme1n1 : 1.01 10997.92 42.96 0.00 0.00 11593.50 2293.76 14930.81 00:26:39.671 =================================================================================================================== 00:26:39.671 Total : 10997.92 42.96 0.00 0.00 11593.50 2293.76 14930.81 00:26:39.931 17:34:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=24940 00:26:39.931 17:34:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:39.931 17:34:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:39.931 17:34:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:39.931 17:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:39.931 17:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:39.931 17:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:39.931 17:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:26:39.931 { 00:26:39.931 "params": { 00:26:39.931 "name": "Nvme$subsystem", 00:26:39.931 "trtype": "$TEST_TRANSPORT", 00:26:39.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:39.931 "adrfam": "ipv4", 00:26:39.931 "trsvcid": "$NVMF_PORT", 00:26:39.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:39.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:39.931 "hdgst": ${hdgst:-false}, 00:26:39.931 "ddgst": ${ddgst:-false} 00:26:39.931 }, 00:26:39.931 "method": "bdev_nvme_attach_controller" 00:26:39.931 } 00:26:39.931 EOF 00:26:39.931 )") 00:26:39.931 17:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:39.931 17:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:39.931 17:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:39.931 17:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:39.931 "params": { 00:26:39.931 "name": "Nvme1", 00:26:39.931 "trtype": "tcp", 00:26:39.931 "traddr": "10.0.0.2", 00:26:39.931 "adrfam": "ipv4", 00:26:39.931 "trsvcid": "4420", 00:26:39.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:39.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:39.931 "hdgst": false, 00:26:39.931 "ddgst": false 00:26:39.931 }, 00:26:39.931 "method": "bdev_nvme_attach_controller" 00:26:39.931 }' 00:26:39.931 [2024-07-12 17:34:58.576857] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:26:39.931 [2024-07-12 17:34:58.576905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid24940 ] 00:26:39.931 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.931 [2024-07-12 17:34:58.632308] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.931 [2024-07-12 17:34:58.702026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.190 Running I/O for 15 seconds... 00:26:43.484 17:35:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 24610 00:26:43.484 17:35:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:43.484 [2024-07-12 17:35:01.552102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.484 [2024-07-12 17:35:01.552143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.484 [2024-07-12 17:35:01.552160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.484 [2024-07-12 17:35:01.552169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.484 [2024-07-12 17:35:01.552180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.484 [2024-07-12 17:35:01.552187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.484 [2024-07-12 17:35:01.552196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000
00:26:43.484 [2024-07-12 17:35:01.552203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.484 [2024-07-12 17:35:01.552212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.484 [2024-07-12 17:35:01.552220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.484 [2024-07-12 17:35:01.552229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.484 [2024-07-12 17:35:01.552236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.484 [2024-07-12 17:35:01.552245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.484 [2024-07-12 17:35:01.552253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.484 [2024-07-12 17:35:01.552261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.484 [2024-07-12 17:35:01.552268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.484 [2024-07-12 17:35:01.552277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.484 [2024-07-12 17:35:01.552284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.484 [2024-07-12 17:35:01.552299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.484 [2024-07-12 17:35:01.552307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.484 [2024-07-12 17:35:01.552316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.484 [2024-07-12 17:35:01.552324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.484 [2024-07-12 17:35:01.552333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.484 [2024-07-12 17:35:01.552342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.484 [2024-07-12 17:35:01.552351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.484 [2024-07-12 17:35:01.552357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.484 [2024-07-12 17:35:01.552368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.484 [2024-07-12 17:35:01.552376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.484 [2024-07-12 17:35:01.552395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.484 [2024-07-12 17:35:01.552403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.485 [2024-07-12 17:35:01.552522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.485 [2024-07-12 17:35:01.552934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.485 [2024-07-12 17:35:01.552942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.552953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.552961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.552971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.552979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.552989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.552997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.486 [2024-07-12 17:35:01.553442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.486 [2024-07-12 17:35:01.553451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.487 [2024-07-12 17:35:01.553883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.487 [2024-07-12 17:35:01.553897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.487 [2024-07-12 17:35:01.553936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.487 [2024-07-12 17:35:01.553942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.488 [2024-07-12 17:35:01.553950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.488 [2024-07-12 17:35:01.553957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.488 [2024-07-12 17:35:01.553964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.488 [2024-07-12 17:35:01.553971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.488 [2024-07-12 17:35:01.553979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.488 [2024-07-12 17:35:01.553985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.488 [2024-07-12 17:35:01.553993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.488 [2024-07-12 17:35:01.553999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.488 [2024-07-12 17:35:01.554007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.488 [2024-07-12 17:35:01.554014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.488 [2024-07-12 17:35:01.554022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.488 [2024-07-12 17:35:01.554028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.488 [2024-07-12 17:35:01.554035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.488 [2024-07-12 17:35:01.554042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.488 [2024-07-12 17:35:01.554050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:43.488 [2024-07-12 17:35:01.554058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.488 [2024-07-12 17:35:01.554066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-07-12 17:35:01.554072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.488 [2024-07-12 17:35:01.554080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-07-12 17:35:01.554086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.488 [2024-07-12 17:35:01.554094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-07-12 17:35:01.554100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.488 [2024-07-12 17:35:01.554108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.488 [2024-07-12 17:35:01.554114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.488 [2024-07-12 17:35:01.554121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbc70 is same with the state(5) to be set 00:26:43.488 [2024-07-12 17:35:01.554129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:43.488 [2024-07-12 17:35:01.554134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:26:43.488 [2024-07-12 17:35:01.554140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96368 len:8 PRP1 0x0 PRP2 0x0 00:26:43.488 [2024-07-12 17:35:01.554151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.488 [2024-07-12 17:35:01.554193] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7fbc70 was disconnected and freed. reset controller. 00:26:43.488 [2024-07-12 17:35:01.554235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.488 [2024-07-12 17:35:01.554244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.488 [2024-07-12 17:35:01.554252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.488 [2024-07-12 17:35:01.554259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.488 [2024-07-12 17:35:01.554266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.488 [2024-07-12 17:35:01.554273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.488 [2024-07-12 17:35:01.554280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.488 [2024-07-12 17:35:01.554286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.488 [2024-07-12 17:35:01.554292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.488 [2024-07-12 17:35:01.557140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.488 [2024-07-12 17:35:01.557166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.488 [2024-07-12 17:35:01.557794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.488 [2024-07-12 17:35:01.557810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.488 [2024-07-12 17:35:01.557817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.488 [2024-07-12 17:35:01.557996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.488 [2024-07-12 17:35:01.558172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.488 [2024-07-12 17:35:01.558181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.488 [2024-07-12 17:35:01.558189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.488 [2024-07-12 17:35:01.561022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.488 [2024-07-12 17:35:01.570528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.488 [2024-07-12 17:35:01.570911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.488 [2024-07-12 17:35:01.570929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.488 [2024-07-12 17:35:01.570936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.488 [2024-07-12 17:35:01.571113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.488 [2024-07-12 17:35:01.571289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.488 [2024-07-12 17:35:01.571297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.488 [2024-07-12 17:35:01.571304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.488 [2024-07-12 17:35:01.574103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.488 [2024-07-12 17:35:01.583317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.488 [2024-07-12 17:35:01.583741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.488 [2024-07-12 17:35:01.583785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.488 [2024-07-12 17:35:01.583807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.488 [2024-07-12 17:35:01.584399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.488 [2024-07-12 17:35:01.584831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.488 [2024-07-12 17:35:01.584839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.488 [2024-07-12 17:35:01.584845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.488 [2024-07-12 17:35:01.587596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.488 [2024-07-12 17:35:01.596247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.488 [2024-07-12 17:35:01.596565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.488 [2024-07-12 17:35:01.596581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.489 [2024-07-12 17:35:01.596588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.489 [2024-07-12 17:35:01.596759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.489 [2024-07-12 17:35:01.596938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.489 [2024-07-12 17:35:01.596946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.489 [2024-07-12 17:35:01.596952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.489 [2024-07-12 17:35:01.599637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.489 [2024-07-12 17:35:01.609085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.489 [2024-07-12 17:35:01.609450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.489 [2024-07-12 17:35:01.609466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.489 [2024-07-12 17:35:01.609473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.489 [2024-07-12 17:35:01.609644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.489 [2024-07-12 17:35:01.609819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.489 [2024-07-12 17:35:01.609827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.489 [2024-07-12 17:35:01.609833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.489 [2024-07-12 17:35:01.612518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.489 [2024-07-12 17:35:01.621969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.489 [2024-07-12 17:35:01.622344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.489 [2024-07-12 17:35:01.622360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.489 [2024-07-12 17:35:01.622367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.489 [2024-07-12 17:35:01.622545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.489 [2024-07-12 17:35:01.622716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.489 [2024-07-12 17:35:01.622723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.489 [2024-07-12 17:35:01.622729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.489 [2024-07-12 17:35:01.625410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.489 [2024-07-12 17:35:01.634856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.489 [2024-07-12 17:35:01.635167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.489 [2024-07-12 17:35:01.635183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.489 [2024-07-12 17:35:01.635189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.489 [2024-07-12 17:35:01.635360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.489 [2024-07-12 17:35:01.635539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.489 [2024-07-12 17:35:01.635547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.489 [2024-07-12 17:35:01.635553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.489 [2024-07-12 17:35:01.638286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.489 [2024-07-12 17:35:01.647675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.489 [2024-07-12 17:35:01.648051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.489 [2024-07-12 17:35:01.648067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.489 [2024-07-12 17:35:01.648074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.489 [2024-07-12 17:35:01.648245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.489 [2024-07-12 17:35:01.648423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.489 [2024-07-12 17:35:01.648431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.489 [2024-07-12 17:35:01.648437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.489 [2024-07-12 17:35:01.651116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.489 [2024-07-12 17:35:01.660565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.489 [2024-07-12 17:35:01.660879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.489 [2024-07-12 17:35:01.660893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.489 [2024-07-12 17:35:01.660900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.489 [2024-07-12 17:35:01.661070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.489 [2024-07-12 17:35:01.661242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.489 [2024-07-12 17:35:01.661250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.489 [2024-07-12 17:35:01.661256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.489 [2024-07-12 17:35:01.663940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.489 [2024-07-12 17:35:01.673390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.489 [2024-07-12 17:35:01.673749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.489 [2024-07-12 17:35:01.673764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.489 [2024-07-12 17:35:01.673771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.489 [2024-07-12 17:35:01.673942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.489 [2024-07-12 17:35:01.674112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.489 [2024-07-12 17:35:01.674120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.489 [2024-07-12 17:35:01.674126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.490 [2024-07-12 17:35:01.676809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.490 [2024-07-12 17:35:01.686533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.490 [2024-07-12 17:35:01.686932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.490 [2024-07-12 17:35:01.686949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.490 [2024-07-12 17:35:01.686959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.490 [2024-07-12 17:35:01.687136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.490 [2024-07-12 17:35:01.687315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.490 [2024-07-12 17:35:01.687324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.490 [2024-07-12 17:35:01.687330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.490 [2024-07-12 17:35:01.690166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.490 [2024-07-12 17:35:01.699685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.490 [2024-07-12 17:35:01.699994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.490 [2024-07-12 17:35:01.700009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.490 [2024-07-12 17:35:01.700016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.490 [2024-07-12 17:35:01.700192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.490 [2024-07-12 17:35:01.700369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.490 [2024-07-12 17:35:01.700383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.490 [2024-07-12 17:35:01.700391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.490 [2024-07-12 17:35:01.703216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.490 [2024-07-12 17:35:01.712831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.490 [2024-07-12 17:35:01.713264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.490 [2024-07-12 17:35:01.713279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.490 [2024-07-12 17:35:01.713286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.490 [2024-07-12 17:35:01.713474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.490 [2024-07-12 17:35:01.713656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.490 [2024-07-12 17:35:01.713664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.490 [2024-07-12 17:35:01.713671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.490 [2024-07-12 17:35:01.716582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.490 [2024-07-12 17:35:01.726007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.490 [2024-07-12 17:35:01.726461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.490 [2024-07-12 17:35:01.726478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.490 [2024-07-12 17:35:01.726485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.490 [2024-07-12 17:35:01.726667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.490 [2024-07-12 17:35:01.726871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.490 [2024-07-12 17:35:01.726883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.490 [2024-07-12 17:35:01.726890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.490 [2024-07-12 17:35:01.729856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.490 [2024-07-12 17:35:01.739371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.490 [2024-07-12 17:35:01.739840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.490 [2024-07-12 17:35:01.739857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.490 [2024-07-12 17:35:01.739864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.490 [2024-07-12 17:35:01.740057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.490 [2024-07-12 17:35:01.740252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.490 [2024-07-12 17:35:01.740261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.490 [2024-07-12 17:35:01.740267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.490 [2024-07-12 17:35:01.743261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.490 [2024-07-12 17:35:01.752648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.490 [2024-07-12 17:35:01.753115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.490 [2024-07-12 17:35:01.753131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.490 [2024-07-12 17:35:01.753138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.490 [2024-07-12 17:35:01.753332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.490 [2024-07-12 17:35:01.753532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.490 [2024-07-12 17:35:01.753542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.490 [2024-07-12 17:35:01.753548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.490 [2024-07-12 17:35:01.756534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.490 [2024-07-12 17:35:01.765817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.490 [2024-07-12 17:35:01.766246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.490 [2024-07-12 17:35:01.766262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.490 [2024-07-12 17:35:01.766269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.490 [2024-07-12 17:35:01.766454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.490 [2024-07-12 17:35:01.766636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.490 [2024-07-12 17:35:01.766644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.490 [2024-07-12 17:35:01.766650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.490 [2024-07-12 17:35:01.769559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.490 [2024-07-12 17:35:01.779093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.490 [2024-07-12 17:35:01.779568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.490 [2024-07-12 17:35:01.779585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.490 [2024-07-12 17:35:01.779592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.490 [2024-07-12 17:35:01.779786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.490 [2024-07-12 17:35:01.779980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.491 [2024-07-12 17:35:01.779989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.491 [2024-07-12 17:35:01.779996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.491 [2024-07-12 17:35:01.783085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.491 [2024-07-12 17:35:01.792258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.491 [2024-07-12 17:35:01.792749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.491 [2024-07-12 17:35:01.792765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.491 [2024-07-12 17:35:01.792773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.491 [2024-07-12 17:35:01.792967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.491 [2024-07-12 17:35:01.793161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.491 [2024-07-12 17:35:01.793169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.491 [2024-07-12 17:35:01.793176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.491 [2024-07-12 17:35:01.796219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.491 [2024-07-12 17:35:01.805632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.491 [2024-07-12 17:35:01.806096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.491 [2024-07-12 17:35:01.806113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.491 [2024-07-12 17:35:01.806120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.491 [2024-07-12 17:35:01.806315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.491 [2024-07-12 17:35:01.806515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.491 [2024-07-12 17:35:01.806524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.491 [2024-07-12 17:35:01.806531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.491 [2024-07-12 17:35:01.809535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.491 [2024-07-12 17:35:01.818835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.491 [2024-07-12 17:35:01.819266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.491 [2024-07-12 17:35:01.819281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.491 [2024-07-12 17:35:01.819289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.491 [2024-07-12 17:35:01.819480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.491 [2024-07-12 17:35:01.819662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.491 [2024-07-12 17:35:01.819670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.491 [2024-07-12 17:35:01.819676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.491 [2024-07-12 17:35:01.822600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.491 [2024-07-12 17:35:01.831955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.491 [2024-07-12 17:35:01.832325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.491 [2024-07-12 17:35:01.832340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.491 [2024-07-12 17:35:01.832347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.491 [2024-07-12 17:35:01.832528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.491 [2024-07-12 17:35:01.832704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.491 [2024-07-12 17:35:01.832712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.491 [2024-07-12 17:35:01.832718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.491 [2024-07-12 17:35:01.835550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.491 [2024-07-12 17:35:01.844871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.491 [2024-07-12 17:35:01.845308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.491 [2024-07-12 17:35:01.845324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.491 [2024-07-12 17:35:01.845330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.491 [2024-07-12 17:35:01.845506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.491 [2024-07-12 17:35:01.845677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.491 [2024-07-12 17:35:01.845685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.491 [2024-07-12 17:35:01.845691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.491 [2024-07-12 17:35:01.848371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.491 [2024-07-12 17:35:01.857674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.491 [2024-07-12 17:35:01.858034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.491 [2024-07-12 17:35:01.858050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.491 [2024-07-12 17:35:01.858057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.491 [2024-07-12 17:35:01.858227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.491 [2024-07-12 17:35:01.858406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.491 [2024-07-12 17:35:01.858414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.491 [2024-07-12 17:35:01.858424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.491 [2024-07-12 17:35:01.861145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.491 [2024-07-12 17:35:01.870597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.491 [2024-07-12 17:35:01.870982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.491 [2024-07-12 17:35:01.870997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.491 [2024-07-12 17:35:01.871004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.491 [2024-07-12 17:35:01.871175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.491 [2024-07-12 17:35:01.871345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.491 [2024-07-12 17:35:01.871353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.491 [2024-07-12 17:35:01.871359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.491 [2024-07-12 17:35:01.874042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.491 [2024-07-12 17:35:01.883486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.491 [2024-07-12 17:35:01.883850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.491 [2024-07-12 17:35:01.883865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.491 [2024-07-12 17:35:01.883872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.491 [2024-07-12 17:35:01.884042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.491 [2024-07-12 17:35:01.884212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.491 [2024-07-12 17:35:01.884220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.491 [2024-07-12 17:35:01.884226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.491 [2024-07-12 17:35:01.886913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.491 [2024-07-12 17:35:01.896344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.491 [2024-07-12 17:35:01.896801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.491 [2024-07-12 17:35:01.896843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.492 [2024-07-12 17:35:01.896865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.492 [2024-07-12 17:35:01.897397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.492 [2024-07-12 17:35:01.897579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.492 [2024-07-12 17:35:01.897586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.492 [2024-07-12 17:35:01.897592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.492 [2024-07-12 17:35:01.900183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.492 [2024-07-12 17:35:01.909151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.492 [2024-07-12 17:35:01.909587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.492 [2024-07-12 17:35:01.909602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.492 [2024-07-12 17:35:01.909608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.492 [2024-07-12 17:35:01.909770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.492 [2024-07-12 17:35:01.909931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.492 [2024-07-12 17:35:01.909939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.492 [2024-07-12 17:35:01.909944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.492 [2024-07-12 17:35:01.912614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.492 [2024-07-12 17:35:01.922047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.492 [2024-07-12 17:35:01.922448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.492 [2024-07-12 17:35:01.922464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.492 [2024-07-12 17:35:01.922470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.492 [2024-07-12 17:35:01.922641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.492 [2024-07-12 17:35:01.922812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.492 [2024-07-12 17:35:01.922820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.492 [2024-07-12 17:35:01.922826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.492 [2024-07-12 17:35:01.925503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.492 [2024-07-12 17:35:01.934929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.492 [2024-07-12 17:35:01.935359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.492 [2024-07-12 17:35:01.935374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.492 [2024-07-12 17:35:01.935387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.492 [2024-07-12 17:35:01.935558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.492 [2024-07-12 17:35:01.935729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.492 [2024-07-12 17:35:01.935737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.492 [2024-07-12 17:35:01.935743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.492 [2024-07-12 17:35:01.938498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.492 [2024-07-12 17:35:01.947757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.492 [2024-07-12 17:35:01.948214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.492 [2024-07-12 17:35:01.948228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.492 [2024-07-12 17:35:01.948235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.492 [2024-07-12 17:35:01.948410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.492 [2024-07-12 17:35:01.948585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.492 [2024-07-12 17:35:01.948592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.492 [2024-07-12 17:35:01.948598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.492 [2024-07-12 17:35:01.951271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.492 [2024-07-12 17:35:01.960631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.492 [2024-07-12 17:35:01.961059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.492 [2024-07-12 17:35:01.961104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.492 [2024-07-12 17:35:01.961125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.492 [2024-07-12 17:35:01.961667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.492 [2024-07-12 17:35:01.961840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.492 [2024-07-12 17:35:01.961848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.492 [2024-07-12 17:35:01.961854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.492 [2024-07-12 17:35:01.964545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.492 [2024-07-12 17:35:01.973556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.492 [2024-07-12 17:35:01.974012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.492 [2024-07-12 17:35:01.974027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.492 [2024-07-12 17:35:01.974034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.492 [2024-07-12 17:35:01.974205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.492 [2024-07-12 17:35:01.974376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.492 [2024-07-12 17:35:01.974391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.492 [2024-07-12 17:35:01.974397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.492 [2024-07-12 17:35:01.977070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.492 [2024-07-12 17:35:01.986350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.492 [2024-07-12 17:35:01.986831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.492 [2024-07-12 17:35:01.986874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.492 [2024-07-12 17:35:01.986896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.492 [2024-07-12 17:35:01.987442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.492 [2024-07-12 17:35:01.987614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.492 [2024-07-12 17:35:01.987622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.492 [2024-07-12 17:35:01.987628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.492 [2024-07-12 17:35:01.990304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.492 [2024-07-12 17:35:01.999211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.492 [2024-07-12 17:35:01.999587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.492 [2024-07-12 17:35:01.999603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.492 [2024-07-12 17:35:01.999609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.492 [2024-07-12 17:35:01.999780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.492 [2024-07-12 17:35:01.999950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.492 [2024-07-12 17:35:01.999958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.492 [2024-07-12 17:35:01.999964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.493 [2024-07-12 17:35:02.002643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.493 [2024-07-12 17:35:02.012002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.493 [2024-07-12 17:35:02.012464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.493 [2024-07-12 17:35:02.012506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.493 [2024-07-12 17:35:02.012528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.493 [2024-07-12 17:35:02.012979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.493 [2024-07-12 17:35:02.013200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.493 [2024-07-12 17:35:02.013211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.493 [2024-07-12 17:35:02.013220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.493 [2024-07-12 17:35:02.017281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.493 [2024-07-12 17:35:02.025408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.493 [2024-07-12 17:35:02.025832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.493 [2024-07-12 17:35:02.025848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.493 [2024-07-12 17:35:02.025854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.493 [2024-07-12 17:35:02.026024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.493 [2024-07-12 17:35:02.026195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.493 [2024-07-12 17:35:02.026202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.493 [2024-07-12 17:35:02.026208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.493 [2024-07-12 17:35:02.028928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.493 [2024-07-12 17:35:02.038300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.493 [2024-07-12 17:35:02.038673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.493 [2024-07-12 17:35:02.038689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.493 [2024-07-12 17:35:02.038700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.493 [2024-07-12 17:35:02.038871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.493 [2024-07-12 17:35:02.039041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.493 [2024-07-12 17:35:02.039049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.493 [2024-07-12 17:35:02.039055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.493 [2024-07-12 17:35:02.041730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.493 [2024-07-12 17:35:02.051158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.493 [2024-07-12 17:35:02.051585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.493 [2024-07-12 17:35:02.051601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.493 [2024-07-12 17:35:02.051607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.493 [2024-07-12 17:35:02.051769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.493 [2024-07-12 17:35:02.051930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.493 [2024-07-12 17:35:02.051937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.493 [2024-07-12 17:35:02.051943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.493 [2024-07-12 17:35:02.054606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.493 [2024-07-12 17:35:02.063990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.493 [2024-07-12 17:35:02.064420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.493 [2024-07-12 17:35:02.064437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.493 [2024-07-12 17:35:02.064444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.493 [2024-07-12 17:35:02.064621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.493 [2024-07-12 17:35:02.064798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.493 [2024-07-12 17:35:02.064806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.493 [2024-07-12 17:35:02.064812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.493 [2024-07-12 17:35:02.067637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.493 [2024-07-12 17:35:02.077105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.493 [2024-07-12 17:35:02.077529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.493 [2024-07-12 17:35:02.077545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.493 [2024-07-12 17:35:02.077552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.493 [2024-07-12 17:35:02.077723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.493 [2024-07-12 17:35:02.077898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.493 [2024-07-12 17:35:02.077912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.493 [2024-07-12 17:35:02.077918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.493 [2024-07-12 17:35:02.080665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.493 [2024-07-12 17:35:02.090115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.493 [2024-07-12 17:35:02.090473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.493 [2024-07-12 17:35:02.090489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.493 [2024-07-12 17:35:02.090495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.493 [2024-07-12 17:35:02.090667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.493 [2024-07-12 17:35:02.090838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.493 [2024-07-12 17:35:02.090846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.493 [2024-07-12 17:35:02.090852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.493 [2024-07-12 17:35:02.093532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.493 [2024-07-12 17:35:02.102950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.493 [2024-07-12 17:35:02.103386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.493 [2024-07-12 17:35:02.103423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.493 [2024-07-12 17:35:02.103445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.493 [2024-07-12 17:35:02.104017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.493 [2024-07-12 17:35:02.104179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.493 [2024-07-12 17:35:02.104186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.493 [2024-07-12 17:35:02.104192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.493 [2024-07-12 17:35:02.106923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.493 [2024-07-12 17:35:02.115873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.493 [2024-07-12 17:35:02.116303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.493 [2024-07-12 17:35:02.116318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.493 [2024-07-12 17:35:02.116324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.493 [2024-07-12 17:35:02.116501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.493 [2024-07-12 17:35:02.116673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.493 [2024-07-12 17:35:02.116680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.493 [2024-07-12 17:35:02.116686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.494 [2024-07-12 17:35:02.119358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.494 [2024-07-12 17:35:02.128680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:43.494 [2024-07-12 17:35:02.129122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.494 [2024-07-12 17:35:02.129163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:43.494 [2024-07-12 17:35:02.129184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:43.494 [2024-07-12 17:35:02.129777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:43.494 [2024-07-12 17:35:02.130255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:43.494 [2024-07-12 17:35:02.130263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:43.494 [2024-07-12 17:35:02.130269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:43.494 [2024-07-12 17:35:02.132940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:43.494 [2024-07-12 17:35:02.141539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.494 [2024-07-12 17:35:02.141973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.494 [2024-07-12 17:35:02.141989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.494 [2024-07-12 17:35:02.141995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.494 [2024-07-12 17:35:02.142166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.494 [2024-07-12 17:35:02.142338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.494 [2024-07-12 17:35:02.142346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.494 [2024-07-12 17:35:02.142352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.494 [2024-07-12 17:35:02.145034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.494 [2024-07-12 17:35:02.154366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.494 [2024-07-12 17:35:02.154821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.494 [2024-07-12 17:35:02.154863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.494 [2024-07-12 17:35:02.154885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.494 [2024-07-12 17:35:02.155323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.494 [2024-07-12 17:35:02.155500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.494 [2024-07-12 17:35:02.155509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.494 [2024-07-12 17:35:02.155515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.494 [2024-07-12 17:35:02.158186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.494 [2024-07-12 17:35:02.167242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.494 [2024-07-12 17:35:02.167680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.494 [2024-07-12 17:35:02.167696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.494 [2024-07-12 17:35:02.167702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.494 [2024-07-12 17:35:02.167877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.494 [2024-07-12 17:35:02.168048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.494 [2024-07-12 17:35:02.168056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.494 [2024-07-12 17:35:02.168062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.494 [2024-07-12 17:35:02.170737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.494 [2024-07-12 17:35:02.180157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.494 [2024-07-12 17:35:02.180584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.494 [2024-07-12 17:35:02.180599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.494 [2024-07-12 17:35:02.180606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.494 [2024-07-12 17:35:02.180776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.494 [2024-07-12 17:35:02.180946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.494 [2024-07-12 17:35:02.180954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.494 [2024-07-12 17:35:02.180960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.494 [2024-07-12 17:35:02.183639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.494 [2024-07-12 17:35:02.193062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.494 [2024-07-12 17:35:02.193473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.494 [2024-07-12 17:35:02.193516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.494 [2024-07-12 17:35:02.193538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.494 [2024-07-12 17:35:02.194115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.494 [2024-07-12 17:35:02.194368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.494 [2024-07-12 17:35:02.194376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.494 [2024-07-12 17:35:02.194387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.494 [2024-07-12 17:35:02.198194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.494 [2024-07-12 17:35:02.206624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.494 [2024-07-12 17:35:02.207056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.494 [2024-07-12 17:35:02.207071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.494 [2024-07-12 17:35:02.207078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.494 [2024-07-12 17:35:02.207249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.494 [2024-07-12 17:35:02.207427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.494 [2024-07-12 17:35:02.207435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.494 [2024-07-12 17:35:02.207445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.494 [2024-07-12 17:35:02.210152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.494 [2024-07-12 17:35:02.219523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.494 [2024-07-12 17:35:02.219973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.494 [2024-07-12 17:35:02.220010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.494 [2024-07-12 17:35:02.220032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.494 [2024-07-12 17:35:02.220626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.494 [2024-07-12 17:35:02.221140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.494 [2024-07-12 17:35:02.221148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.494 [2024-07-12 17:35:02.221154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.494 [2024-07-12 17:35:02.223828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.494 [2024-07-12 17:35:02.232444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.494 [2024-07-12 17:35:02.232875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.494 [2024-07-12 17:35:02.232890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.494 [2024-07-12 17:35:02.232897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.495 [2024-07-12 17:35:02.233068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.495 [2024-07-12 17:35:02.233243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.495 [2024-07-12 17:35:02.233250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.495 [2024-07-12 17:35:02.233257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.495 [2024-07-12 17:35:02.235938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.495 [2024-07-12 17:35:02.245295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.495 [2024-07-12 17:35:02.245702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.495 [2024-07-12 17:35:02.245717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.495 [2024-07-12 17:35:02.245724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.495 [2024-07-12 17:35:02.245895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.495 [2024-07-12 17:35:02.246067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.495 [2024-07-12 17:35:02.246074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.495 [2024-07-12 17:35:02.246080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.495 [2024-07-12 17:35:02.248758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.755 [2024-07-12 17:35:02.258296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.755 [2024-07-12 17:35:02.258760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.755 [2024-07-12 17:35:02.258775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.755 [2024-07-12 17:35:02.258782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.755 [2024-07-12 17:35:02.258957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.755 [2024-07-12 17:35:02.259133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.755 [2024-07-12 17:35:02.259141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.755 [2024-07-12 17:35:02.259147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.755 [2024-07-12 17:35:02.261844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.755 [2024-07-12 17:35:02.271207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.755 [2024-07-12 17:35:02.271648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.755 [2024-07-12 17:35:02.271690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.755 [2024-07-12 17:35:02.271711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.755 [2024-07-12 17:35:02.272191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.755 [2024-07-12 17:35:02.272362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.755 [2024-07-12 17:35:02.272370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.755 [2024-07-12 17:35:02.272376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.755 [2024-07-12 17:35:02.275078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.755 [2024-07-12 17:35:02.284133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.755 [2024-07-12 17:35:02.284572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.755 [2024-07-12 17:35:02.284587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.755 [2024-07-12 17:35:02.284593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.755 [2024-07-12 17:35:02.284755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.755 [2024-07-12 17:35:02.284916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.756 [2024-07-12 17:35:02.284924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.756 [2024-07-12 17:35:02.284929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.756 [2024-07-12 17:35:02.287597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.756 [2024-07-12 17:35:02.297019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.756 [2024-07-12 17:35:02.297432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.756 [2024-07-12 17:35:02.297447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.756 [2024-07-12 17:35:02.297454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.756 [2024-07-12 17:35:02.297618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.756 [2024-07-12 17:35:02.297779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.756 [2024-07-12 17:35:02.297787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.756 [2024-07-12 17:35:02.297792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.756 [2024-07-12 17:35:02.300453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.756 [2024-07-12 17:35:02.309963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.756 [2024-07-12 17:35:02.310391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.756 [2024-07-12 17:35:02.310406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.756 [2024-07-12 17:35:02.310413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.756 [2024-07-12 17:35:02.310584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.756 [2024-07-12 17:35:02.310756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.756 [2024-07-12 17:35:02.310763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.756 [2024-07-12 17:35:02.310769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.756 [2024-07-12 17:35:02.313553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.756 [2024-07-12 17:35:02.322740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.756 [2024-07-12 17:35:02.323171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.756 [2024-07-12 17:35:02.323187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.756 [2024-07-12 17:35:02.323194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.756 [2024-07-12 17:35:02.323364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.756 [2024-07-12 17:35:02.323563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.756 [2024-07-12 17:35:02.323572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.756 [2024-07-12 17:35:02.323578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.756 [2024-07-12 17:35:02.326397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.756 [2024-07-12 17:35:02.335973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.756 [2024-07-12 17:35:02.336385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.756 [2024-07-12 17:35:02.336401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.756 [2024-07-12 17:35:02.336409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.756 [2024-07-12 17:35:02.336586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.756 [2024-07-12 17:35:02.336768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.756 [2024-07-12 17:35:02.336775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.756 [2024-07-12 17:35:02.336781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.756 [2024-07-12 17:35:02.339533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.756 [2024-07-12 17:35:02.349032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.756 [2024-07-12 17:35:02.349473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.756 [2024-07-12 17:35:02.349516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.756 [2024-07-12 17:35:02.349538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.756 [2024-07-12 17:35:02.350116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.756 [2024-07-12 17:35:02.350657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.756 [2024-07-12 17:35:02.350666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.756 [2024-07-12 17:35:02.350671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.756 [2024-07-12 17:35:02.353344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.756 [2024-07-12 17:35:02.361846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.756 [2024-07-12 17:35:02.362295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.756 [2024-07-12 17:35:02.362337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.756 [2024-07-12 17:35:02.362358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.756 [2024-07-12 17:35:02.362849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.756 [2024-07-12 17:35:02.363021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.756 [2024-07-12 17:35:02.363028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.756 [2024-07-12 17:35:02.363034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.756 [2024-07-12 17:35:02.365709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.756 [2024-07-12 17:35:02.374639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.756 [2024-07-12 17:35:02.375052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.756 [2024-07-12 17:35:02.375067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.756 [2024-07-12 17:35:02.375073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.756 [2024-07-12 17:35:02.375235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.756 [2024-07-12 17:35:02.375401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.756 [2024-07-12 17:35:02.375425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.756 [2024-07-12 17:35:02.375431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.756 [2024-07-12 17:35:02.378101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.756 [2024-07-12 17:35:02.387526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.756 [2024-07-12 17:35:02.387954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.756 [2024-07-12 17:35:02.387970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.756 [2024-07-12 17:35:02.387979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.756 [2024-07-12 17:35:02.388149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.756 [2024-07-12 17:35:02.388321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.756 [2024-07-12 17:35:02.388329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.756 [2024-07-12 17:35:02.388335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.756 [2024-07-12 17:35:02.391021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.756 [2024-07-12 17:35:02.400445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.757 [2024-07-12 17:35:02.400800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.757 [2024-07-12 17:35:02.400816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.757 [2024-07-12 17:35:02.400823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.757 [2024-07-12 17:35:02.400993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.757 [2024-07-12 17:35:02.401164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.757 [2024-07-12 17:35:02.401172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.757 [2024-07-12 17:35:02.401178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.757 [2024-07-12 17:35:02.403856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.757 [2024-07-12 17:35:02.413274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.757 [2024-07-12 17:35:02.413704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.757 [2024-07-12 17:35:02.413720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.757 [2024-07-12 17:35:02.413726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.757 [2024-07-12 17:35:02.413896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.757 [2024-07-12 17:35:02.414067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.757 [2024-07-12 17:35:02.414074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.757 [2024-07-12 17:35:02.414081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.757 [2024-07-12 17:35:02.416758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.757 [2024-07-12 17:35:02.426173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.757 [2024-07-12 17:35:02.426604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.757 [2024-07-12 17:35:02.426619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.757 [2024-07-12 17:35:02.426626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.757 [2024-07-12 17:35:02.426796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.757 [2024-07-12 17:35:02.426970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.757 [2024-07-12 17:35:02.426978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.757 [2024-07-12 17:35:02.426983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.757 [2024-07-12 17:35:02.429660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.757 [2024-07-12 17:35:02.439055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.757 [2024-07-12 17:35:02.439465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.757 [2024-07-12 17:35:02.439507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.757 [2024-07-12 17:35:02.439529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.757 [2024-07-12 17:35:02.439956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.757 [2024-07-12 17:35:02.440132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.757 [2024-07-12 17:35:02.440139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.757 [2024-07-12 17:35:02.440145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.757 [2024-07-12 17:35:02.442739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.757 [2024-07-12 17:35:02.451892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.757 [2024-07-12 17:35:02.452293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.757 [2024-07-12 17:35:02.452308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.757 [2024-07-12 17:35:02.452314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.757 [2024-07-12 17:35:02.452502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.757 [2024-07-12 17:35:02.452674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.757 [2024-07-12 17:35:02.452681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.757 [2024-07-12 17:35:02.452687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.757 [2024-07-12 17:35:02.455360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.757 [2024-07-12 17:35:02.464671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.757 [2024-07-12 17:35:02.465098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.757 [2024-07-12 17:35:02.465113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.757 [2024-07-12 17:35:02.465120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.757 [2024-07-12 17:35:02.465291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.757 [2024-07-12 17:35:02.465473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.757 [2024-07-12 17:35:02.465481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.757 [2024-07-12 17:35:02.465487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.757 [2024-07-12 17:35:02.468159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.757 [2024-07-12 17:35:02.477526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.757 [2024-07-12 17:35:02.477947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.757 [2024-07-12 17:35:02.477989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.757 [2024-07-12 17:35:02.478010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.757 [2024-07-12 17:35:02.478600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.757 [2024-07-12 17:35:02.478829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.757 [2024-07-12 17:35:02.478837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.757 [2024-07-12 17:35:02.478843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.757 [2024-07-12 17:35:02.481515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.757 [2024-07-12 17:35:02.490460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.757 [2024-07-12 17:35:02.490842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.757 [2024-07-12 17:35:02.490856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.757 [2024-07-12 17:35:02.490863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.757 [2024-07-12 17:35:02.491024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.757 [2024-07-12 17:35:02.491185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.757 [2024-07-12 17:35:02.491192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.757 [2024-07-12 17:35:02.491198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.757 [2024-07-12 17:35:02.493883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.757 [2024-07-12 17:35:02.503302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.757 [2024-07-12 17:35:02.503752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.757 [2024-07-12 17:35:02.503794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.757 [2024-07-12 17:35:02.503815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.757 [2024-07-12 17:35:02.504406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.757 [2024-07-12 17:35:02.504986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.758 [2024-07-12 17:35:02.505009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.758 [2024-07-12 17:35:02.505030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.758 [2024-07-12 17:35:02.507764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.758 [2024-07-12 17:35:02.516243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.758 [2024-07-12 17:35:02.516674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.758 [2024-07-12 17:35:02.516689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.758 [2024-07-12 17:35:02.516699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.758 [2024-07-12 17:35:02.516870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.758 [2024-07-12 17:35:02.517040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.758 [2024-07-12 17:35:02.517048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.758 [2024-07-12 17:35:02.517054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.758 [2024-07-12 17:35:02.519733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.758 [2024-07-12 17:35:02.529234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.758 [2024-07-12 17:35:02.529691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.758 [2024-07-12 17:35:02.529733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:43.758 [2024-07-12 17:35:02.529754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:43.758 [2024-07-12 17:35:02.530238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:43.758 [2024-07-12 17:35:02.530416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.758 [2024-07-12 17:35:02.530425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.758 [2024-07-12 17:35:02.530432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.758 [2024-07-12 17:35:02.533170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.019 [2024-07-12 17:35:02.542337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.019 [2024-07-12 17:35:02.542700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.019 [2024-07-12 17:35:02.542716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.019 [2024-07-12 17:35:02.542723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.019 [2024-07-12 17:35:02.542894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.019 [2024-07-12 17:35:02.543069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.019 [2024-07-12 17:35:02.543077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.019 [2024-07-12 17:35:02.543083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.019 [2024-07-12 17:35:02.545762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.019 [2024-07-12 17:35:02.555178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.019 [2024-07-12 17:35:02.555609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.019 [2024-07-12 17:35:02.555624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.019 [2024-07-12 17:35:02.555631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.019 [2024-07-12 17:35:02.555801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.019 [2024-07-12 17:35:02.555971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.019 [2024-07-12 17:35:02.555979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.019 [2024-07-12 17:35:02.555988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.019 [2024-07-12 17:35:02.558729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.019 [2024-07-12 17:35:02.567963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.019 [2024-07-12 17:35:02.568391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.019 [2024-07-12 17:35:02.568407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.019 [2024-07-12 17:35:02.568413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.019 [2024-07-12 17:35:02.568584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.019 [2024-07-12 17:35:02.568755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.019 [2024-07-12 17:35:02.568763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.019 [2024-07-12 17:35:02.568769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.019 [2024-07-12 17:35:02.571563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.019 [2024-07-12 17:35:02.580775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.019 [2024-07-12 17:35:02.581221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.019 [2024-07-12 17:35:02.581237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.019 [2024-07-12 17:35:02.581244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.019 [2024-07-12 17:35:02.581428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.019 [2024-07-12 17:35:02.581604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.019 [2024-07-12 17:35:02.581612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.019 [2024-07-12 17:35:02.581618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.019 [2024-07-12 17:35:02.584450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.019 [2024-07-12 17:35:02.593866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.019 [2024-07-12 17:35:02.594307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.019 [2024-07-12 17:35:02.594349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.019 [2024-07-12 17:35:02.594370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.019 [2024-07-12 17:35:02.594961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.019 [2024-07-12 17:35:02.595402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.019 [2024-07-12 17:35:02.595410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.019 [2024-07-12 17:35:02.595417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.019 [2024-07-12 17:35:02.598150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.019 [2024-07-12 17:35:02.606883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.019 [2024-07-12 17:35:02.607308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.019 [2024-07-12 17:35:02.607323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.019 [2024-07-12 17:35:02.607330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.019 [2024-07-12 17:35:02.607513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.019 [2024-07-12 17:35:02.607696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.019 [2024-07-12 17:35:02.607703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.019 [2024-07-12 17:35:02.607709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.019 [2024-07-12 17:35:02.610383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.019 [2024-07-12 17:35:02.619800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.019 [2024-07-12 17:35:02.620227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.019 [2024-07-12 17:35:02.620242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.019 [2024-07-12 17:35:02.620249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.019 [2024-07-12 17:35:02.620426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.019 [2024-07-12 17:35:02.620598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.020 [2024-07-12 17:35:02.620606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.020 [2024-07-12 17:35:02.620612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.020 [2024-07-12 17:35:02.623282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.020 [2024-07-12 17:35:02.632732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.020 [2024-07-12 17:35:02.633149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.020 [2024-07-12 17:35:02.633164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.020 [2024-07-12 17:35:02.633171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.020 [2024-07-12 17:35:02.633342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.020 [2024-07-12 17:35:02.633518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.020 [2024-07-12 17:35:02.633526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.020 [2024-07-12 17:35:02.633532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.020 [2024-07-12 17:35:02.636206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.020 [2024-07-12 17:35:02.645571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.020 [2024-07-12 17:35:02.646017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.020 [2024-07-12 17:35:02.646056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.020 [2024-07-12 17:35:02.646077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.020 [2024-07-12 17:35:02.646676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.020 [2024-07-12 17:35:02.647256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.020 [2024-07-12 17:35:02.647279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.020 [2024-07-12 17:35:02.647303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.020 [2024-07-12 17:35:02.649973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.020 [2024-07-12 17:35:02.658484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.020 [2024-07-12 17:35:02.658902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.020 [2024-07-12 17:35:02.658943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.020 [2024-07-12 17:35:02.658964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.020 [2024-07-12 17:35:02.659430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.020 [2024-07-12 17:35:02.659603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.020 [2024-07-12 17:35:02.659610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.020 [2024-07-12 17:35:02.659616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.020 [2024-07-12 17:35:02.662265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.020 [2024-07-12 17:35:02.671368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.020 [2024-07-12 17:35:02.671806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.020 [2024-07-12 17:35:02.671821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.020 [2024-07-12 17:35:02.671827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.020 [2024-07-12 17:35:02.671998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.020 [2024-07-12 17:35:02.672168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.020 [2024-07-12 17:35:02.672176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.020 [2024-07-12 17:35:02.672181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.020 [2024-07-12 17:35:02.674861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.020 [2024-07-12 17:35:02.684238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.020 [2024-07-12 17:35:02.684657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.020 [2024-07-12 17:35:02.684673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.020 [2024-07-12 17:35:02.684679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.020 [2024-07-12 17:35:02.684850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.020 [2024-07-12 17:35:02.685021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.020 [2024-07-12 17:35:02.685029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.020 [2024-07-12 17:35:02.685035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.020 [2024-07-12 17:35:02.687715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.020 [2024-07-12 17:35:02.697079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.020 [2024-07-12 17:35:02.697533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.020 [2024-07-12 17:35:02.697549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.020 [2024-07-12 17:35:02.697555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.020 [2024-07-12 17:35:02.697726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.020 [2024-07-12 17:35:02.697901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.020 [2024-07-12 17:35:02.697909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.020 [2024-07-12 17:35:02.697915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.020 [2024-07-12 17:35:02.700592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.020 [2024-07-12 17:35:02.709879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.020 [2024-07-12 17:35:02.710320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.020 [2024-07-12 17:35:02.710362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.020 [2024-07-12 17:35:02.710396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.020 [2024-07-12 17:35:02.710975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.020 [2024-07-12 17:35:02.711470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.020 [2024-07-12 17:35:02.711479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.020 [2024-07-12 17:35:02.711485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.020 [2024-07-12 17:35:02.714160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.020 [2024-07-12 17:35:02.722701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.020 [2024-07-12 17:35:02.723139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.020 [2024-07-12 17:35:02.723177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.020 [2024-07-12 17:35:02.723199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.020 [2024-07-12 17:35:02.723790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.020 [2024-07-12 17:35:02.724370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.020 [2024-07-12 17:35:02.724415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.020 [2024-07-12 17:35:02.724421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.020 [2024-07-12 17:35:02.727094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.020 [2024-07-12 17:35:02.735607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.020 [2024-07-12 17:35:02.736023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.020 [2024-07-12 17:35:02.736077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.020 [2024-07-12 17:35:02.736098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.020 [2024-07-12 17:35:02.736606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.020 [2024-07-12 17:35:02.736778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.020 [2024-07-12 17:35:02.736786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.020 [2024-07-12 17:35:02.736792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.020 [2024-07-12 17:35:02.739467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.020 [2024-07-12 17:35:02.748436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.020 [2024-07-12 17:35:02.748871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.020 [2024-07-12 17:35:02.748912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.020 [2024-07-12 17:35:02.748934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.020 [2024-07-12 17:35:02.749421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.020 [2024-07-12 17:35:02.749594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.020 [2024-07-12 17:35:02.749601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.020 [2024-07-12 17:35:02.749607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.020 [2024-07-12 17:35:02.753468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.020 [2024-07-12 17:35:02.761942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.020 [2024-07-12 17:35:02.762397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.020 [2024-07-12 17:35:02.762439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.020 [2024-07-12 17:35:02.762460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.020 [2024-07-12 17:35:02.763037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.020 [2024-07-12 17:35:02.763621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.021 [2024-07-12 17:35:02.763630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.021 [2024-07-12 17:35:02.763635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.021 [2024-07-12 17:35:02.766341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.021 [2024-07-12 17:35:02.774752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.021 [2024-07-12 17:35:02.775168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.021 [2024-07-12 17:35:02.775209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.021 [2024-07-12 17:35:02.775230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.021 [2024-07-12 17:35:02.775732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.021 [2024-07-12 17:35:02.775908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.021 [2024-07-12 17:35:02.775916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.021 [2024-07-12 17:35:02.775922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.021 [2024-07-12 17:35:02.778596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.021 [2024-07-12 17:35:02.787662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.021 [2024-07-12 17:35:02.788015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.021 [2024-07-12 17:35:02.788030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.021 [2024-07-12 17:35:02.788036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.021 [2024-07-12 17:35:02.788207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.021 [2024-07-12 17:35:02.788384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.021 [2024-07-12 17:35:02.788392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.021 [2024-07-12 17:35:02.788399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.021 [2024-07-12 17:35:02.791135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.800630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.801066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.801081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.801088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.801259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.801441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.801449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.801456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.804194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.813440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.813887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.813928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.813949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.814500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.814672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.814680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.814686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.817359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.826332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.826767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.826800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.826823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.827414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.827606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.827614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.827620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.830289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.839417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.839881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.839896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.839903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.840079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.840255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.840264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.840270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.843093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.852389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.852802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.852817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.852824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.852994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.853164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.853171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.853177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.855924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.865295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.865756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.865772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.865782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.865953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.866124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.866132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.866138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.868820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.878112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.878518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.878534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.878541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.878711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.878883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.878891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.878896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.881582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.891179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.891601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.891618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.891624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.891795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.891965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.891973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.891979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.894659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.904043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.904427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.904469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.904491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.905032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.905204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.905212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.905222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.907946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.916912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.917362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.917384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.917391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.917562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.917733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.917740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.917746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.920427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.930094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.930470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.930486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.930493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.930670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.930846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.930854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.930861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.933689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.943062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.943540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.943582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.943603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.944191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.944353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.944361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.944367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.947057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.955907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.956341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.956356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.956362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.956539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.956709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.956717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.956723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.959400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.968728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.969193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.969233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.969255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.969846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.970330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.970338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.970344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.973027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.981556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.982007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.982022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.982029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.982199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.982372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.982395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.982402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.985078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:02.994551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:02.994937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:02.994978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:02.995001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:02.995600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:02.996181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:02.996203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:02.996209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:02.998894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:03.007428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:03.007804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:03.007820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.288 [2024-07-12 17:35:03.007827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.288 [2024-07-12 17:35:03.007998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.288 [2024-07-12 17:35:03.008168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.288 [2024-07-12 17:35:03.008176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.288 [2024-07-12 17:35:03.008183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.288 [2024-07-12 17:35:03.010864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.288 [2024-07-12 17:35:03.020408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.288 [2024-07-12 17:35:03.020730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.288 [2024-07-12 17:35:03.020773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.289 [2024-07-12 17:35:03.020794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.289 [2024-07-12 17:35:03.021367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.289 [2024-07-12 17:35:03.021545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.289 [2024-07-12 17:35:03.021554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.289 [2024-07-12 17:35:03.021560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.289 [2024-07-12 17:35:03.024234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.289 [2024-07-12 17:35:03.033219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.289 [2024-07-12 17:35:03.033574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-12 17:35:03.033590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.289 [2024-07-12 17:35:03.033596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.289 [2024-07-12 17:35:03.033768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.289 [2024-07-12 17:35:03.033939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.289 [2024-07-12 17:35:03.033947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.289 [2024-07-12 17:35:03.033956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.289 [2024-07-12 17:35:03.036640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.289 [2024-07-12 17:35:03.046336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.289 [2024-07-12 17:35:03.046770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-12 17:35:03.046786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.289 [2024-07-12 17:35:03.046794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.289 [2024-07-12 17:35:03.046970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.289 [2024-07-12 17:35:03.047147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.289 [2024-07-12 17:35:03.047156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.289 [2024-07-12 17:35:03.047164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.289 [2024-07-12 17:35:03.049998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.289 [2024-07-12 17:35:03.059511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.289 [2024-07-12 17:35:03.059890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.289 [2024-07-12 17:35:03.059906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.289 [2024-07-12 17:35:03.059913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.289 [2024-07-12 17:35:03.060089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.289 [2024-07-12 17:35:03.060267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.289 [2024-07-12 17:35:03.060275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.289 [2024-07-12 17:35:03.060281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.289 [2024-07-12 17:35:03.063114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.550 [2024-07-12 17:35:03.072621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.550 [2024-07-12 17:35:03.072997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.550 [2024-07-12 17:35:03.073013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.550 [2024-07-12 17:35:03.073020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.550 [2024-07-12 17:35:03.073197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.550 [2024-07-12 17:35:03.073373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.550 [2024-07-12 17:35:03.073387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.550 [2024-07-12 17:35:03.073394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.550 [2024-07-12 17:35:03.076215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.550 [2024-07-12 17:35:03.085728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.550 [2024-07-12 17:35:03.086094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.550 [2024-07-12 17:35:03.086113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.550 [2024-07-12 17:35:03.086120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.550 [2024-07-12 17:35:03.086296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.550 [2024-07-12 17:35:03.086480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.550 [2024-07-12 17:35:03.086489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.550 [2024-07-12 17:35:03.086495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.550 [2024-07-12 17:35:03.089321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.550 [2024-07-12 17:35:03.098839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.550 [2024-07-12 17:35:03.099289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.550 [2024-07-12 17:35:03.099306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.550 [2024-07-12 17:35:03.099313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.550 [2024-07-12 17:35:03.099495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.550 [2024-07-12 17:35:03.099672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.550 [2024-07-12 17:35:03.099680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.550 [2024-07-12 17:35:03.099686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.550 [2024-07-12 17:35:03.102510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.550 [2024-07-12 17:35:03.112028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.550 [2024-07-12 17:35:03.112492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-07-12 17:35:03.112508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.550 [2024-07-12 17:35:03.112514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.550 [2024-07-12 17:35:03.112690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.550 [2024-07-12 17:35:03.112867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.550 [2024-07-12 17:35:03.112875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.550 [2024-07-12 17:35:03.112881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.550 [2024-07-12 17:35:03.115706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.550 [2024-07-12 17:35:03.125219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.550 [2024-07-12 17:35:03.125668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-07-12 17:35:03.125684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.550 [2024-07-12 17:35:03.125691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.550 [2024-07-12 17:35:03.125867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.550 [2024-07-12 17:35:03.126047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.550 [2024-07-12 17:35:03.126055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.550 [2024-07-12 17:35:03.126061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.550 [2024-07-12 17:35:03.128888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.550 [2024-07-12 17:35:03.138403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.550 [2024-07-12 17:35:03.138853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-07-12 17:35:03.138868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.550 [2024-07-12 17:35:03.138875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.550 [2024-07-12 17:35:03.139051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.550 [2024-07-12 17:35:03.139228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.550 [2024-07-12 17:35:03.139236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.550 [2024-07-12 17:35:03.139243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.550 [2024-07-12 17:35:03.142068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.550 [2024-07-12 17:35:03.151491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.550 [2024-07-12 17:35:03.151948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-07-12 17:35:03.151963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.550 [2024-07-12 17:35:03.151970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.550 [2024-07-12 17:35:03.152146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.550 [2024-07-12 17:35:03.152323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.550 [2024-07-12 17:35:03.152331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.550 [2024-07-12 17:35:03.152337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.550 [2024-07-12 17:35:03.155166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.550 [2024-07-12 17:35:03.164676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.550 [2024-07-12 17:35:03.165028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-07-12 17:35:03.165044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.550 [2024-07-12 17:35:03.165051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.550 [2024-07-12 17:35:03.165227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.550 [2024-07-12 17:35:03.165409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.550 [2024-07-12 17:35:03.165417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.550 [2024-07-12 17:35:03.165424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.550 [2024-07-12 17:35:03.168248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.550 [2024-07-12 17:35:03.177761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.550 [2024-07-12 17:35:03.178213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-07-12 17:35:03.178228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.550 [2024-07-12 17:35:03.178235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.550 [2024-07-12 17:35:03.178418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.550 [2024-07-12 17:35:03.178595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.550 [2024-07-12 17:35:03.178602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.550 [2024-07-12 17:35:03.178608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.550 [2024-07-12 17:35:03.181431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.550 [2024-07-12 17:35:03.190952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.550 [2024-07-12 17:35:03.191299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-07-12 17:35:03.191315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.550 [2024-07-12 17:35:03.191321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.550 [2024-07-12 17:35:03.191503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.550 [2024-07-12 17:35:03.191680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.550 [2024-07-12 17:35:03.191688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.550 [2024-07-12 17:35:03.191693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.550 [2024-07-12 17:35:03.194516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.550 [2024-07-12 17:35:03.204023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.550 [2024-07-12 17:35:03.204449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-07-12 17:35:03.204465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.550 [2024-07-12 17:35:03.204472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.550 [2024-07-12 17:35:03.204647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.550 [2024-07-12 17:35:03.204823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.550 [2024-07-12 17:35:03.204832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.550 [2024-07-12 17:35:03.204838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.550 [2024-07-12 17:35:03.207666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.550 [2024-07-12 17:35:03.217168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.550 [2024-07-12 17:35:03.217545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-07-12 17:35:03.217561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.550 [2024-07-12 17:35:03.217571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.550 [2024-07-12 17:35:03.217747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.550 [2024-07-12 17:35:03.217923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.550 [2024-07-12 17:35:03.217931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.550 [2024-07-12 17:35:03.217937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.550 [2024-07-12 17:35:03.220761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.550 [2024-07-12 17:35:03.230267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.550 [2024-07-12 17:35:03.230719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-07-12 17:35:03.230735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.550 [2024-07-12 17:35:03.230742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.550 [2024-07-12 17:35:03.230918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.550 [2024-07-12 17:35:03.231112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.550 [2024-07-12 17:35:03.231120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.550 [2024-07-12 17:35:03.231126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.550 [2024-07-12 17:35:03.233982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.550 [2024-07-12 17:35:03.243312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.551 [2024-07-12 17:35:03.243760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-07-12 17:35:03.243776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.551 [2024-07-12 17:35:03.243782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.551 [2024-07-12 17:35:03.243959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.551 [2024-07-12 17:35:03.244136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.551 [2024-07-12 17:35:03.244144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.551 [2024-07-12 17:35:03.244150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.551 [2024-07-12 17:35:03.246980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.551 [2024-07-12 17:35:03.256491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.551 [2024-07-12 17:35:03.256935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-07-12 17:35:03.256950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.551 [2024-07-12 17:35:03.256957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.551 [2024-07-12 17:35:03.257133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.551 [2024-07-12 17:35:03.257309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.551 [2024-07-12 17:35:03.257320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.551 [2024-07-12 17:35:03.257326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.551 [2024-07-12 17:35:03.260153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.551 [2024-07-12 17:35:03.269666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.551 [2024-07-12 17:35:03.270112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-07-12 17:35:03.270127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.551 [2024-07-12 17:35:03.270134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.551 [2024-07-12 17:35:03.270310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.551 [2024-07-12 17:35:03.270492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.551 [2024-07-12 17:35:03.270500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.551 [2024-07-12 17:35:03.270506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.551 [2024-07-12 17:35:03.273326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.551 [2024-07-12 17:35:03.282842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.551 [2024-07-12 17:35:03.283282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-07-12 17:35:03.283325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.551 [2024-07-12 17:35:03.283346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.551 [2024-07-12 17:35:03.283914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.551 [2024-07-12 17:35:03.284091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.551 [2024-07-12 17:35:03.284099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.551 [2024-07-12 17:35:03.284105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.551 [2024-07-12 17:35:03.286932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.551 [2024-07-12 17:35:03.295938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.551 [2024-07-12 17:35:03.296322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-07-12 17:35:03.296362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.551 [2024-07-12 17:35:03.296398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.551 [2024-07-12 17:35:03.296919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.551 [2024-07-12 17:35:03.297096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.551 [2024-07-12 17:35:03.297104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.551 [2024-07-12 17:35:03.297110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.551 [2024-07-12 17:35:03.299935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.551 [2024-07-12 17:35:03.308982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.551 [2024-07-12 17:35:03.309451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-07-12 17:35:03.309467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.551 [2024-07-12 17:35:03.309474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.551 [2024-07-12 17:35:03.309657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.551 [2024-07-12 17:35:03.309828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.551 [2024-07-12 17:35:03.309836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.551 [2024-07-12 17:35:03.309843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.551 [2024-07-12 17:35:03.312583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.551 [2024-07-12 17:35:03.321854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.551 [2024-07-12 17:35:03.322330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-07-12 17:35:03.322371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.551 [2024-07-12 17:35:03.322410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.551 [2024-07-12 17:35:03.322922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.551 [2024-07-12 17:35:03.323093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.551 [2024-07-12 17:35:03.323101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.551 [2024-07-12 17:35:03.323107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.551 [2024-07-12 17:35:03.325904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.811 [2024-07-12 17:35:03.335042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.811 [2024-07-12 17:35:03.335508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.811 [2024-07-12 17:35:03.335524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.811 [2024-07-12 17:35:03.335531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.811 [2024-07-12 17:35:03.335703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.811 [2024-07-12 17:35:03.335874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.811 [2024-07-12 17:35:03.335882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.811 [2024-07-12 17:35:03.335888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.811 [2024-07-12 17:35:03.338648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.811 [2024-07-12 17:35:03.347887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.811 [2024-07-12 17:35:03.348341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.811 [2024-07-12 17:35:03.348357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.811 [2024-07-12 17:35:03.348363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.811 [2024-07-12 17:35:03.348545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.811 [2024-07-12 17:35:03.348716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.811 [2024-07-12 17:35:03.348724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.811 [2024-07-12 17:35:03.348730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.811 [2024-07-12 17:35:03.351406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.811 [2024-07-12 17:35:03.360959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.811 [2024-07-12 17:35:03.361407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.811 [2024-07-12 17:35:03.361423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.811 [2024-07-12 17:35:03.361430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.811 [2024-07-12 17:35:03.361612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.811 [2024-07-12 17:35:03.361783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.811 [2024-07-12 17:35:03.361791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.811 [2024-07-12 17:35:03.361797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.811 [2024-07-12 17:35:03.364586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.811 [2024-07-12 17:35:03.373849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.811 [2024-07-12 17:35:03.374318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.811 [2024-07-12 17:35:03.374359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.811 [2024-07-12 17:35:03.374395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.811 [2024-07-12 17:35:03.374832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.811 [2024-07-12 17:35:03.375002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.811 [2024-07-12 17:35:03.375010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.811 [2024-07-12 17:35:03.375016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.811 [2024-07-12 17:35:03.377755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.811 [2024-07-12 17:35:03.386739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.811 [2024-07-12 17:35:03.387155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.811 [2024-07-12 17:35:03.387197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.811 [2024-07-12 17:35:03.387218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.811 [2024-07-12 17:35:03.387811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.811 [2024-07-12 17:35:03.388059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.811 [2024-07-12 17:35:03.388067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.811 [2024-07-12 17:35:03.388080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.811 [2024-07-12 17:35:03.390766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.811 [2024-07-12 17:35:03.399577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.811 [2024-07-12 17:35:03.399921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.811 [2024-07-12 17:35:03.399936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:44.811 [2024-07-12 17:35:03.399942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:44.811 [2024-07-12 17:35:03.400104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:44.811 [2024-07-12 17:35:03.400266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.811 [2024-07-12 17:35:03.400273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.811 [2024-07-12 17:35:03.400278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.811 [2024-07-12 17:35:03.402970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.811 [2024-07-12 17:35:03.412491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.811 [2024-07-12 17:35:03.412899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.811 [2024-07-12 17:35:03.412914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.811 [2024-07-12 17:35:03.412920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.811 [2024-07-12 17:35:03.413082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.811 [2024-07-12 17:35:03.413242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.811 [2024-07-12 17:35:03.413250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.811 [2024-07-12 17:35:03.413256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.811 [2024-07-12 17:35:03.415939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.812 [2024-07-12 17:35:03.425359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.812 [2024-07-12 17:35:03.425810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.812 [2024-07-12 17:35:03.425851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.812 [2024-07-12 17:35:03.425872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.812 [2024-07-12 17:35:03.426356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.812 [2024-07-12 17:35:03.426546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.812 [2024-07-12 17:35:03.426555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.812 [2024-07-12 17:35:03.426561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.812 [2024-07-12 17:35:03.429236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.812 [2024-07-12 17:35:03.438201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.812 [2024-07-12 17:35:03.438633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.812 [2024-07-12 17:35:03.438652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.812 [2024-07-12 17:35:03.438659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.812 [2024-07-12 17:35:03.438830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.812 [2024-07-12 17:35:03.439000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.812 [2024-07-12 17:35:03.439008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.812 [2024-07-12 17:35:03.439014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.812 [2024-07-12 17:35:03.441719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.812 [2024-07-12 17:35:03.451163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.812 [2024-07-12 17:35:03.451619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.812 [2024-07-12 17:35:03.451658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.812 [2024-07-12 17:35:03.451680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.812 [2024-07-12 17:35:03.452249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.812 [2024-07-12 17:35:03.452427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.812 [2024-07-12 17:35:03.452436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.812 [2024-07-12 17:35:03.452442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.812 [2024-07-12 17:35:03.455113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.812 [2024-07-12 17:35:03.463967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.812 [2024-07-12 17:35:03.464408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.812 [2024-07-12 17:35:03.464451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.812 [2024-07-12 17:35:03.464472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.812 [2024-07-12 17:35:03.465050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.812 [2024-07-12 17:35:03.465482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.812 [2024-07-12 17:35:03.465490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.812 [2024-07-12 17:35:03.465496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.812 [2024-07-12 17:35:03.468103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.812 [2024-07-12 17:35:03.476803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.812 [2024-07-12 17:35:03.477268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.812 [2024-07-12 17:35:03.477309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.812 [2024-07-12 17:35:03.477330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.812 [2024-07-12 17:35:03.477918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.812 [2024-07-12 17:35:03.478176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.812 [2024-07-12 17:35:03.478187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.812 [2024-07-12 17:35:03.478196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.812 [2024-07-12 17:35:03.482245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.812 [2024-07-12 17:35:03.490247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.812 [2024-07-12 17:35:03.490621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.812 [2024-07-12 17:35:03.490637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.812 [2024-07-12 17:35:03.490644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.812 [2024-07-12 17:35:03.490814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.812 [2024-07-12 17:35:03.490985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.812 [2024-07-12 17:35:03.490992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.812 [2024-07-12 17:35:03.490998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.812 [2024-07-12 17:35:03.493728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.812 [2024-07-12 17:35:03.503091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.812 [2024-07-12 17:35:03.503484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.812 [2024-07-12 17:35:03.503500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.812 [2024-07-12 17:35:03.503506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.812 [2024-07-12 17:35:03.503668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.812 [2024-07-12 17:35:03.503829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.812 [2024-07-12 17:35:03.503836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.812 [2024-07-12 17:35:03.503842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.812 [2024-07-12 17:35:03.506502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.812 [2024-07-12 17:35:03.516238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.812 [2024-07-12 17:35:03.516686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.812 [2024-07-12 17:35:03.516702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.812 [2024-07-12 17:35:03.516708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.812 [2024-07-12 17:35:03.516879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.812 [2024-07-12 17:35:03.517050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.812 [2024-07-12 17:35:03.517058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.812 [2024-07-12 17:35:03.517064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.812 [2024-07-12 17:35:03.519740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.812 [2024-07-12 17:35:03.529171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.812 [2024-07-12 17:35:03.529579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.812 [2024-07-12 17:35:03.529594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.812 [2024-07-12 17:35:03.529600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.812 [2024-07-12 17:35:03.529761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.812 [2024-07-12 17:35:03.529923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.812 [2024-07-12 17:35:03.529930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.812 [2024-07-12 17:35:03.529936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.812 [2024-07-12 17:35:03.532606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.812 [2024-07-12 17:35:03.542027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.812 [2024-07-12 17:35:03.542455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.812 [2024-07-12 17:35:03.542471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.812 [2024-07-12 17:35:03.542478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.812 [2024-07-12 17:35:03.542649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.812 [2024-07-12 17:35:03.542823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.812 [2024-07-12 17:35:03.542831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.812 [2024-07-12 17:35:03.542837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.812 [2024-07-12 17:35:03.545524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.812 [2024-07-12 17:35:03.554948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.812 [2024-07-12 17:35:03.555407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.812 [2024-07-12 17:35:03.555424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.812 [2024-07-12 17:35:03.555430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.812 [2024-07-12 17:35:03.555612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.812 [2024-07-12 17:35:03.555772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.812 [2024-07-12 17:35:03.555779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.812 [2024-07-12 17:35:03.555784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.813 [2024-07-12 17:35:03.558559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.813 [2024-07-12 17:35:03.567843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.813 [2024-07-12 17:35:03.568215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.813 [2024-07-12 17:35:03.568231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.813 [2024-07-12 17:35:03.568240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.813 [2024-07-12 17:35:03.568417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.813 [2024-07-12 17:35:03.568588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.813 [2024-07-12 17:35:03.568596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.813 [2024-07-12 17:35:03.568602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.813 [2024-07-12 17:35:03.571278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:44.813 [2024-07-12 17:35:03.580640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:44.813 [2024-07-12 17:35:03.581048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.813 [2024-07-12 17:35:03.581063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:44.813 [2024-07-12 17:35:03.581070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:44.813 [2024-07-12 17:35:03.581231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:44.813 [2024-07-12 17:35:03.581399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:44.813 [2024-07-12 17:35:03.581424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:44.813 [2024-07-12 17:35:03.581430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:44.813 [2024-07-12 17:35:03.584138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.072 [2024-07-12 17:35:03.593719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.073 [2024-07-12 17:35:03.594095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.073 [2024-07-12 17:35:03.594112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.073 [2024-07-12 17:35:03.594119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.073 [2024-07-12 17:35:03.594291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.073 [2024-07-12 17:35:03.594472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.073 [2024-07-12 17:35:03.594481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.073 [2024-07-12 17:35:03.594487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.073 [2024-07-12 17:35:03.597164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.073 [2024-07-12 17:35:03.606587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.073 [2024-07-12 17:35:03.607039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.073 [2024-07-12 17:35:03.607055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.073 [2024-07-12 17:35:03.607062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.073 [2024-07-12 17:35:03.607237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.073 [2024-07-12 17:35:03.607420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.073 [2024-07-12 17:35:03.607431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.073 [2024-07-12 17:35:03.607438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.073 [2024-07-12 17:35:03.610256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.073 [2024-07-12 17:35:03.619673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.073 [2024-07-12 17:35:03.620088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.073 [2024-07-12 17:35:03.620103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.073 [2024-07-12 17:35:03.620110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.073 [2024-07-12 17:35:03.620281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.073 [2024-07-12 17:35:03.620458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.073 [2024-07-12 17:35:03.620466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.073 [2024-07-12 17:35:03.620472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.073 [2024-07-12 17:35:03.623208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.073 [2024-07-12 17:35:03.632752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.073 [2024-07-12 17:35:03.633111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.073 [2024-07-12 17:35:03.633126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.073 [2024-07-12 17:35:03.633133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.073 [2024-07-12 17:35:03.633305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.073 [2024-07-12 17:35:03.633483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.073 [2024-07-12 17:35:03.633491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.073 [2024-07-12 17:35:03.633497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.073 [2024-07-12 17:35:03.636169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.073 [2024-07-12 17:35:03.645592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.073 [2024-07-12 17:35:03.645998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.073 [2024-07-12 17:35:03.646012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.073 [2024-07-12 17:35:03.646019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.073 [2024-07-12 17:35:03.646180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.073 [2024-07-12 17:35:03.646341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.073 [2024-07-12 17:35:03.646349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.073 [2024-07-12 17:35:03.646354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.073 [2024-07-12 17:35:03.649047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.073 [2024-07-12 17:35:03.658470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.073 [2024-07-12 17:35:03.658898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.073 [2024-07-12 17:35:03.658912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.073 [2024-07-12 17:35:03.658918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.073 [2024-07-12 17:35:03.659079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.073 [2024-07-12 17:35:03.659240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.073 [2024-07-12 17:35:03.659248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.073 [2024-07-12 17:35:03.659253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.073 [2024-07-12 17:35:03.661938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.073 [2024-07-12 17:35:03.671351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.073 [2024-07-12 17:35:03.671733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.073 [2024-07-12 17:35:03.671748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.073 [2024-07-12 17:35:03.671755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.073 [2024-07-12 17:35:03.671926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.073 [2024-07-12 17:35:03.672100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.073 [2024-07-12 17:35:03.672108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.073 [2024-07-12 17:35:03.672114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.073 [2024-07-12 17:35:03.674791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.073 [2024-07-12 17:35:03.684192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.073 [2024-07-12 17:35:03.684571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.073 [2024-07-12 17:35:03.684587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.073 [2024-07-12 17:35:03.684593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.073 [2024-07-12 17:35:03.684764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.073 [2024-07-12 17:35:03.684935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.073 [2024-07-12 17:35:03.684942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.073 [2024-07-12 17:35:03.684948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.073 [2024-07-12 17:35:03.687625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.073 [2024-07-12 17:35:03.697051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.073 [2024-07-12 17:35:03.697417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.073 [2024-07-12 17:35:03.697460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.073 [2024-07-12 17:35:03.697481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.073 [2024-07-12 17:35:03.697944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.073 [2024-07-12 17:35:03.698105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.073 [2024-07-12 17:35:03.698113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.073 [2024-07-12 17:35:03.698119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.073 [2024-07-12 17:35:03.700708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.073 [2024-07-12 17:35:03.709865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.073 [2024-07-12 17:35:03.710293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.073 [2024-07-12 17:35:03.710307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.073 [2024-07-12 17:35:03.710313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.073 [2024-07-12 17:35:03.710500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.073 [2024-07-12 17:35:03.710671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.073 [2024-07-12 17:35:03.710679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.073 [2024-07-12 17:35:03.710685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.073 [2024-07-12 17:35:03.713356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.073 [2024-07-12 17:35:03.722880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.073 [2024-07-12 17:35:03.723262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.073 [2024-07-12 17:35:03.723278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.073 [2024-07-12 17:35:03.723285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.073 [2024-07-12 17:35:03.723461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.073 [2024-07-12 17:35:03.723632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.073 [2024-07-12 17:35:03.723641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.073 [2024-07-12 17:35:03.723647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.074 [2024-07-12 17:35:03.726321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.074 [2024-07-12 17:35:03.735754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.074 [2024-07-12 17:35:03.736128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-07-12 17:35:03.736144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.074 [2024-07-12 17:35:03.736150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.074 [2024-07-12 17:35:03.736321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.074 [2024-07-12 17:35:03.736497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.074 [2024-07-12 17:35:03.736506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.074 [2024-07-12 17:35:03.736516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.074 [2024-07-12 17:35:03.739190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.074 [2024-07-12 17:35:03.748563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.074 [2024-07-12 17:35:03.748862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-07-12 17:35:03.748877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.074 [2024-07-12 17:35:03.748884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.074 [2024-07-12 17:35:03.749055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.074 [2024-07-12 17:35:03.749226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.074 [2024-07-12 17:35:03.749233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.074 [2024-07-12 17:35:03.749240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.074 [2024-07-12 17:35:03.751916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.074 [2024-07-12 17:35:03.761341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.074 [2024-07-12 17:35:03.761731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-07-12 17:35:03.761772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.074 [2024-07-12 17:35:03.761793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.074 [2024-07-12 17:35:03.762285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.074 [2024-07-12 17:35:03.762461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.074 [2024-07-12 17:35:03.762469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.074 [2024-07-12 17:35:03.762475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.074 [2024-07-12 17:35:03.765147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.074 [2024-07-12 17:35:03.774115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.074 [2024-07-12 17:35:03.774595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-07-12 17:35:03.774636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.074 [2024-07-12 17:35:03.774657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.074 [2024-07-12 17:35:03.775185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.074 [2024-07-12 17:35:03.775347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.074 [2024-07-12 17:35:03.775355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.074 [2024-07-12 17:35:03.775361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.074 [2024-07-12 17:35:03.778052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.074 [2024-07-12 17:35:03.787023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.074 [2024-07-12 17:35:03.787431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-07-12 17:35:03.787450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.074 [2024-07-12 17:35:03.787457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.074 [2024-07-12 17:35:03.787634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.074 [2024-07-12 17:35:03.787795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.074 [2024-07-12 17:35:03.787802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.074 [2024-07-12 17:35:03.787807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.074 [2024-07-12 17:35:03.790470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.074 [2024-07-12 17:35:03.799828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.074 [2024-07-12 17:35:03.800176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-07-12 17:35:03.800191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.074 [2024-07-12 17:35:03.800198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.074 [2024-07-12 17:35:03.800368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.074 [2024-07-12 17:35:03.800544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.074 [2024-07-12 17:35:03.800552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.074 [2024-07-12 17:35:03.800558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.074 [2024-07-12 17:35:03.803230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.074 [2024-07-12 17:35:03.812743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.074 [2024-07-12 17:35:03.813216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-07-12 17:35:03.813257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.074 [2024-07-12 17:35:03.813278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.074 [2024-07-12 17:35:03.813868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.074 [2024-07-12 17:35:03.814109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.074 [2024-07-12 17:35:03.814117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.074 [2024-07-12 17:35:03.814123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.074 [2024-07-12 17:35:03.816803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.074 [2024-07-12 17:35:03.825620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.074 [2024-07-12 17:35:03.826063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-07-12 17:35:03.826078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.074 [2024-07-12 17:35:03.826084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.074 [2024-07-12 17:35:03.826245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.074 [2024-07-12 17:35:03.826416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.074 [2024-07-12 17:35:03.826424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.074 [2024-07-12 17:35:03.826430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.074 [2024-07-12 17:35:03.829020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.074 [2024-07-12 17:35:03.838489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.074 [2024-07-12 17:35:03.838929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-07-12 17:35:03.838971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.074 [2024-07-12 17:35:03.838992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.074 [2024-07-12 17:35:03.839491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.074 [2024-07-12 17:35:03.839663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.074 [2024-07-12 17:35:03.839671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.074 [2024-07-12 17:35:03.839677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.074 [2024-07-12 17:35:03.842351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.334 [2024-07-12 17:35:03.851594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.334 [2024-07-12 17:35:03.852034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.334 [2024-07-12 17:35:03.852050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.334 [2024-07-12 17:35:03.852057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.334 [2024-07-12 17:35:03.852228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.334 [2024-07-12 17:35:03.852405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.334 [2024-07-12 17:35:03.852414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.334 [2024-07-12 17:35:03.852420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.334 [2024-07-12 17:35:03.855137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.334 [2024-07-12 17:35:03.864515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.334 [2024-07-12 17:35:03.864970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.334 [2024-07-12 17:35:03.864985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.334 [2024-07-12 17:35:03.864992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.334 [2024-07-12 17:35:03.865163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.334 [2024-07-12 17:35:03.865333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.334 [2024-07-12 17:35:03.865340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.334 [2024-07-12 17:35:03.865346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.334 [2024-07-12 17:35:03.868186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.334 [2024-07-12 17:35:03.877615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.334 [2024-07-12 17:35:03.878053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.334 [2024-07-12 17:35:03.878068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.334 [2024-07-12 17:35:03.878075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.334 [2024-07-12 17:35:03.878245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.335 [2024-07-12 17:35:03.878421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.335 [2024-07-12 17:35:03.878429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.335 [2024-07-12 17:35:03.878435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.335 [2024-07-12 17:35:03.881171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.335 [2024-07-12 17:35:03.890556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.335 [2024-07-12 17:35:03.890892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.335 [2024-07-12 17:35:03.890907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.335 [2024-07-12 17:35:03.890914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.335 [2024-07-12 17:35:03.891085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.335 [2024-07-12 17:35:03.891256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.335 [2024-07-12 17:35:03.891264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.335 [2024-07-12 17:35:03.891270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.335 [2024-07-12 17:35:03.894010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.335 [2024-07-12 17:35:03.903431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.335 [2024-07-12 17:35:03.903870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.335 [2024-07-12 17:35:03.903911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.335 [2024-07-12 17:35:03.903932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.335 [2024-07-12 17:35:03.904479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.335 [2024-07-12 17:35:03.904652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.335 [2024-07-12 17:35:03.904659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.335 [2024-07-12 17:35:03.904665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.335 [2024-07-12 17:35:03.907336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.335 [2024-07-12 17:35:03.916305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.335 [2024-07-12 17:35:03.916739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.335 [2024-07-12 17:35:03.916754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.335 [2024-07-12 17:35:03.916764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.335 [2024-07-12 17:35:03.916935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.335 [2024-07-12 17:35:03.917107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.335 [2024-07-12 17:35:03.917115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.335 [2024-07-12 17:35:03.917121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.335 [2024-07-12 17:35:03.919804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.335 [2024-07-12 17:35:03.929292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.335 [2024-07-12 17:35:03.929738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.335 [2024-07-12 17:35:03.929754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.335 [2024-07-12 17:35:03.929760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.335 [2024-07-12 17:35:03.929930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.335 [2024-07-12 17:35:03.930104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.335 [2024-07-12 17:35:03.930112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.335 [2024-07-12 17:35:03.930118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.335 [2024-07-12 17:35:03.932793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.335 [2024-07-12 17:35:03.942104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.335 [2024-07-12 17:35:03.942538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.335 [2024-07-12 17:35:03.942579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.335 [2024-07-12 17:35:03.942601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.335 [2024-07-12 17:35:03.943178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.335 [2024-07-12 17:35:03.943479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.335 [2024-07-12 17:35:03.943488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.335 [2024-07-12 17:35:03.943494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.335 [2024-07-12 17:35:03.946169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.335 [2024-07-12 17:35:03.954981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.335 [2024-07-12 17:35:03.955426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.335 [2024-07-12 17:35:03.955468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.335 [2024-07-12 17:35:03.955489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.335 [2024-07-12 17:35:03.956067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.335 [2024-07-12 17:35:03.956438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.335 [2024-07-12 17:35:03.956450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.335 [2024-07-12 17:35:03.956456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.335 [2024-07-12 17:35:03.959128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.335 [2024-07-12 17:35:03.967791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.335 [2024-07-12 17:35:03.968052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.335 [2024-07-12 17:35:03.968066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.335 [2024-07-12 17:35:03.968073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.335 [2024-07-12 17:35:03.968234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.335 [2024-07-12 17:35:03.968418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.335 [2024-07-12 17:35:03.968426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.335 [2024-07-12 17:35:03.968432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.335 [2024-07-12 17:35:03.971102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.335 [2024-07-12 17:35:03.980643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.335 [2024-07-12 17:35:03.981084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.335 [2024-07-12 17:35:03.981125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.335 [2024-07-12 17:35:03.981147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.335 [2024-07-12 17:35:03.981583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.335 [2024-07-12 17:35:03.981755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.335 [2024-07-12 17:35:03.981763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.336 [2024-07-12 17:35:03.981769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.336 [2024-07-12 17:35:03.984474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.336 [2024-07-12 17:35:03.993431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.336 [2024-07-12 17:35:03.993893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.336 [2024-07-12 17:35:03.993934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.336 [2024-07-12 17:35:03.993955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.336 [2024-07-12 17:35:03.994470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.336 [2024-07-12 17:35:03.994642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.336 [2024-07-12 17:35:03.994649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.336 [2024-07-12 17:35:03.994656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.336 [2024-07-12 17:35:03.997328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.336 [2024-07-12 17:35:04.006345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.336 [2024-07-12 17:35:04.006782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.336 [2024-07-12 17:35:04.006817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.336 [2024-07-12 17:35:04.006840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.336 [2024-07-12 17:35:04.007399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.336 [2024-07-12 17:35:04.007587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.336 [2024-07-12 17:35:04.007595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.336 [2024-07-12 17:35:04.007601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.336 [2024-07-12 17:35:04.010275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.336 [2024-07-12 17:35:04.019236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.336 [2024-07-12 17:35:04.019697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.336 [2024-07-12 17:35:04.019713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.336 [2024-07-12 17:35:04.019720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.336 [2024-07-12 17:35:04.019890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.336 [2024-07-12 17:35:04.020061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.336 [2024-07-12 17:35:04.020068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.336 [2024-07-12 17:35:04.020074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.336 [2024-07-12 17:35:04.022759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.336 [2024-07-12 17:35:04.032145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.336 [2024-07-12 17:35:04.032593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.336 [2024-07-12 17:35:04.032608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.336 [2024-07-12 17:35:04.032614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.336 [2024-07-12 17:35:04.032776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.336 [2024-07-12 17:35:04.032936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.336 [2024-07-12 17:35:04.032944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.336 [2024-07-12 17:35:04.032949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.336 [2024-07-12 17:35:04.035620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.336 [2024-07-12 17:35:04.044989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.336 [2024-07-12 17:35:04.045357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.336 [2024-07-12 17:35:04.045372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.336 [2024-07-12 17:35:04.045385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.336 [2024-07-12 17:35:04.045578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.336 [2024-07-12 17:35:04.045748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.336 [2024-07-12 17:35:04.045756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.336 [2024-07-12 17:35:04.045762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.336 [2024-07-12 17:35:04.048439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.336 [2024-07-12 17:35:04.057830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.336 [2024-07-12 17:35:04.058292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.336 [2024-07-12 17:35:04.058333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.336 [2024-07-12 17:35:04.058354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.336 [2024-07-12 17:35:04.058805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.336 [2024-07-12 17:35:04.058977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.336 [2024-07-12 17:35:04.058985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.336 [2024-07-12 17:35:04.058991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.336 [2024-07-12 17:35:04.061665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.336 [2024-07-12 17:35:04.070640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.336 [2024-07-12 17:35:04.071071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.336 [2024-07-12 17:35:04.071101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.336 [2024-07-12 17:35:04.071124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.336 [2024-07-12 17:35:04.071714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.336 [2024-07-12 17:35:04.071972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.336 [2024-07-12 17:35:04.071981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.336 [2024-07-12 17:35:04.071987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.336 [2024-07-12 17:35:04.074660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.336 [2024-07-12 17:35:04.083579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.336 [2024-07-12 17:35:04.084066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.336 [2024-07-12 17:35:04.084108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.336 [2024-07-12 17:35:04.084129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.336 [2024-07-12 17:35:04.084613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.336 [2024-07-12 17:35:04.084867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.336 [2024-07-12 17:35:04.084878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.336 [2024-07-12 17:35:04.084891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.336 [2024-07-12 17:35:04.088937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.336 [2024-07-12 17:35:04.096887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.336 [2024-07-12 17:35:04.097268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.336 [2024-07-12 17:35:04.097284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.337 [2024-07-12 17:35:04.097290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.337 [2024-07-12 17:35:04.097465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.337 [2024-07-12 17:35:04.097637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.337 [2024-07-12 17:35:04.097645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.337 [2024-07-12 17:35:04.097651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.337 [2024-07-12 17:35:04.100357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.337 [2024-07-12 17:35:04.110034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.337 [2024-07-12 17:35:04.110472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.337 [2024-07-12 17:35:04.110488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.337 [2024-07-12 17:35:04.110494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.337 [2024-07-12 17:35:04.110665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.337 [2024-07-12 17:35:04.110836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.337 [2024-07-12 17:35:04.110844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.337 [2024-07-12 17:35:04.110850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.597 [2024-07-12 17:35:04.113600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.597 [2024-07-12 17:35:04.122906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.597 [2024-07-12 17:35:04.123316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.597 [2024-07-12 17:35:04.123331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.597 [2024-07-12 17:35:04.123339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.597 [2024-07-12 17:35:04.123520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.597 [2024-07-12 17:35:04.123697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.597 [2024-07-12 17:35:04.123705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.597 [2024-07-12 17:35:04.123711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.597 [2024-07-12 17:35:04.126540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.597 [2024-07-12 17:35:04.135974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.597 [2024-07-12 17:35:04.136414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.597 [2024-07-12 17:35:04.136439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.597 [2024-07-12 17:35:04.136446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.597 [2024-07-12 17:35:04.136630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.597 [2024-07-12 17:35:04.136801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.597 [2024-07-12 17:35:04.136809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.597 [2024-07-12 17:35:04.136814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.597 [2024-07-12 17:35:04.139560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.597 [2024-07-12 17:35:04.148937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.597 [2024-07-12 17:35:04.149357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.597 [2024-07-12 17:35:04.149373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.597 [2024-07-12 17:35:04.149386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.597 [2024-07-12 17:35:04.149558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.597 [2024-07-12 17:35:04.149729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.597 [2024-07-12 17:35:04.149736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.597 [2024-07-12 17:35:04.149742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.597 [2024-07-12 17:35:04.152422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.597 [2024-07-12 17:35:04.161842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.597 [2024-07-12 17:35:04.162281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.597 [2024-07-12 17:35:04.162322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.597 [2024-07-12 17:35:04.162342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.597 [2024-07-12 17:35:04.162785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.597 [2024-07-12 17:35:04.162958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.597 [2024-07-12 17:35:04.162966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.597 [2024-07-12 17:35:04.162972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.597 [2024-07-12 17:35:04.165647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.597 [2024-07-12 17:35:04.174762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.597 [2024-07-12 17:35:04.175198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.597 [2024-07-12 17:35:04.175239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.597 [2024-07-12 17:35:04.175260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.597 [2024-07-12 17:35:04.175817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.597 [2024-07-12 17:35:04.176075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.597 [2024-07-12 17:35:04.176086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.597 [2024-07-12 17:35:04.176095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.597 [2024-07-12 17:35:04.180144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.597 [2024-07-12 17:35:04.188194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.597 [2024-07-12 17:35:04.188635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.597 [2024-07-12 17:35:04.188651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.597 [2024-07-12 17:35:04.188657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.597 [2024-07-12 17:35:04.188828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.597 [2024-07-12 17:35:04.188998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.597 [2024-07-12 17:35:04.189006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.597 [2024-07-12 17:35:04.189011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.597 [2024-07-12 17:35:04.191727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.597 [2024-07-12 17:35:04.200995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.597 [2024-07-12 17:35:04.201426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.597 [2024-07-12 17:35:04.201441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.597 [2024-07-12 17:35:04.201448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.597 [2024-07-12 17:35:04.201619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.597 [2024-07-12 17:35:04.201790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.597 [2024-07-12 17:35:04.201798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.597 [2024-07-12 17:35:04.201804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.597 [2024-07-12 17:35:04.204480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.597 [2024-07-12 17:35:04.213878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.597 [2024-07-12 17:35:04.214296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.597 [2024-07-12 17:35:04.214338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.597 [2024-07-12 17:35:04.214359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.597 [2024-07-12 17:35:04.214918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.597 [2024-07-12 17:35:04.215091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.597 [2024-07-12 17:35:04.215099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.597 [2024-07-12 17:35:04.215105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.597 [2024-07-12 17:35:04.217781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.597 [2024-07-12 17:35:04.226746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.597 [2024-07-12 17:35:04.227174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.597 [2024-07-12 17:35:04.227189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.597 [2024-07-12 17:35:04.227195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.597 [2024-07-12 17:35:04.227366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.597 [2024-07-12 17:35:04.227543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.597 [2024-07-12 17:35:04.227552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.597 [2024-07-12 17:35:04.227558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.597 [2024-07-12 17:35:04.230264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.597 [2024-07-12 17:35:04.239534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.597 [2024-07-12 17:35:04.239962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.598 [2024-07-12 17:35:04.239977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.598 [2024-07-12 17:35:04.239983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.598 [2024-07-12 17:35:04.240154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.598 [2024-07-12 17:35:04.240325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.598 [2024-07-12 17:35:04.240333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.598 [2024-07-12 17:35:04.240339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.598 [2024-07-12 17:35:04.243020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.598 [2024-07-12 17:35:04.252393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.598 [2024-07-12 17:35:04.252808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.598 [2024-07-12 17:35:04.252849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.598 [2024-07-12 17:35:04.252870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.598 [2024-07-12 17:35:04.253278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.598 [2024-07-12 17:35:04.253447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.598 [2024-07-12 17:35:04.253455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.598 [2024-07-12 17:35:04.253461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.598 [2024-07-12 17:35:04.256044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.598 [2024-07-12 17:35:04.265207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.598 [2024-07-12 17:35:04.265552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.598 [2024-07-12 17:35:04.265568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.598 [2024-07-12 17:35:04.265577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.598 [2024-07-12 17:35:04.265748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.598 [2024-07-12 17:35:04.265919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.598 [2024-07-12 17:35:04.265927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.598 [2024-07-12 17:35:04.265933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.598 [2024-07-12 17:35:04.268610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.598 [2024-07-12 17:35:04.278032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.598 [2024-07-12 17:35:04.278464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.598 [2024-07-12 17:35:04.278507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.598 [2024-07-12 17:35:04.278528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.598 [2024-07-12 17:35:04.279106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.598 [2024-07-12 17:35:04.279439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.598 [2024-07-12 17:35:04.279448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.598 [2024-07-12 17:35:04.279454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.598 [2024-07-12 17:35:04.282274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.598 [2024-07-12 17:35:04.291110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.598 [2024-07-12 17:35:04.291531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.598 [2024-07-12 17:35:04.291547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.598 [2024-07-12 17:35:04.291554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.598 [2024-07-12 17:35:04.291730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.598 [2024-07-12 17:35:04.291908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.598 [2024-07-12 17:35:04.291916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.598 [2024-07-12 17:35:04.291922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.598 [2024-07-12 17:35:04.294743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.598 [2024-07-12 17:35:04.304236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.598 [2024-07-12 17:35:04.304659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.598 [2024-07-12 17:35:04.304675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.598 [2024-07-12 17:35:04.304682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.598 [2024-07-12 17:35:04.304858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.598 [2024-07-12 17:35:04.305034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.598 [2024-07-12 17:35:04.305045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.598 [2024-07-12 17:35:04.305051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.598 [2024-07-12 17:35:04.307841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.598 [2024-07-12 17:35:04.317293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.598 [2024-07-12 17:35:04.317621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.598 [2024-07-12 17:35:04.317663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.598 [2024-07-12 17:35:04.317686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.598 [2024-07-12 17:35:04.318202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.598 [2024-07-12 17:35:04.318384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.598 [2024-07-12 17:35:04.318393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.598 [2024-07-12 17:35:04.318399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.598 [2024-07-12 17:35:04.321185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.598 [2024-07-12 17:35:04.330317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.598 [2024-07-12 17:35:04.330676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.598 [2024-07-12 17:35:04.330692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.598 [2024-07-12 17:35:04.330699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.598 [2024-07-12 17:35:04.330874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.598 [2024-07-12 17:35:04.331052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.598 [2024-07-12 17:35:04.331060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.598 [2024-07-12 17:35:04.331066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.598 [2024-07-12 17:35:04.333857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.598 [2024-07-12 17:35:04.343444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.598 [2024-07-12 17:35:04.343837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.598 [2024-07-12 17:35:04.343853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.598 [2024-07-12 17:35:04.343860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.598 [2024-07-12 17:35:04.344036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.598 [2024-07-12 17:35:04.344212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.598 [2024-07-12 17:35:04.344220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.598 [2024-07-12 17:35:04.344226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.598 [2024-07-12 17:35:04.347060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.598 [2024-07-12 17:35:04.356277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.598 [2024-07-12 17:35:04.356738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.598 [2024-07-12 17:35:04.356753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.598 [2024-07-12 17:35:04.356759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.598 [2024-07-12 17:35:04.356931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.598 [2024-07-12 17:35:04.357101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.598 [2024-07-12 17:35:04.357109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.598 [2024-07-12 17:35:04.357114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.598 [2024-07-12 17:35:04.359797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.598 [2024-07-12 17:35:04.369091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.598 [2024-07-12 17:35:04.369525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.598 [2024-07-12 17:35:04.369568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.598 [2024-07-12 17:35:04.369590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.598 [2024-07-12 17:35:04.370167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.598 [2024-07-12 17:35:04.370667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.598 [2024-07-12 17:35:04.370676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.598 [2024-07-12 17:35:04.370682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.598 [2024-07-12 17:35:04.373398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.859 [2024-07-12 17:35:04.382268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.859 [2024-07-12 17:35:04.382664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.859 [2024-07-12 17:35:04.382707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.859 [2024-07-12 17:35:04.382728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.859 [2024-07-12 17:35:04.383307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.859 [2024-07-12 17:35:04.383785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.859 [2024-07-12 17:35:04.383793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.859 [2024-07-12 17:35:04.383799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.859 [2024-07-12 17:35:04.386543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.859 [2024-07-12 17:35:04.395318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.859 [2024-07-12 17:35:04.395732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.859 [2024-07-12 17:35:04.395774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.859 [2024-07-12 17:35:04.395795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.859 [2024-07-12 17:35:04.396397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.859 [2024-07-12 17:35:04.396767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.859 [2024-07-12 17:35:04.396775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.859 [2024-07-12 17:35:04.396781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.859 [2024-07-12 17:35:04.399524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.859 [2024-07-12 17:35:04.408241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.859 [2024-07-12 17:35:04.408593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-07-12 17:35:04.408608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.859 [2024-07-12 17:35:04.408615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.859 [2024-07-12 17:35:04.408786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.859 [2024-07-12 17:35:04.408956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.859 [2024-07-12 17:35:04.408964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.859 [2024-07-12 17:35:04.408970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.859 [2024-07-12 17:35:04.411653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.859 [2024-07-12 17:35:04.421095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.859 [2024-07-12 17:35:04.421567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-07-12 17:35:04.421609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.859 [2024-07-12 17:35:04.421630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.859 [2024-07-12 17:35:04.422207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.859 [2024-07-12 17:35:04.422725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.859 [2024-07-12 17:35:04.422734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.859 [2024-07-12 17:35:04.422740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.859 [2024-07-12 17:35:04.425422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.859 [2024-07-12 17:35:04.433948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.859 [2024-07-12 17:35:04.434405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-07-12 17:35:04.434421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:45.859 [2024-07-12 17:35:04.434428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:45.859 [2024-07-12 17:35:04.434598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:45.859 [2024-07-12 17:35:04.434769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.859 [2024-07-12 17:35:04.434777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.859 [2024-07-12 17:35:04.434786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.859 [2024-07-12 17:35:04.437466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.859 [2024-07-12 17:35:04.446856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.859 [2024-07-12 17:35:04.447313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.859 [2024-07-12 17:35:04.447329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.859 [2024-07-12 17:35:04.447335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.859 [2024-07-12 17:35:04.447511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.859 [2024-07-12 17:35:04.447682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.859 [2024-07-12 17:35:04.447690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.859 [2024-07-12 17:35:04.447695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.859 [2024-07-12 17:35:04.450407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.859 [2024-07-12 17:35:04.459686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.859 [2024-07-12 17:35:04.460130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.859 [2024-07-12 17:35:04.460145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.859 [2024-07-12 17:35:04.460152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.859 [2024-07-12 17:35:04.460323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.859 [2024-07-12 17:35:04.460500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.859 [2024-07-12 17:35:04.460508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.859 [2024-07-12 17:35:04.460514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.859 [2024-07-12 17:35:04.463188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.859 [2024-07-12 17:35:04.472500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.859 [2024-07-12 17:35:04.472873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.859 [2024-07-12 17:35:04.472913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.859 [2024-07-12 17:35:04.472936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.859 [2024-07-12 17:35:04.473463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.859 [2024-07-12 17:35:04.473635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.859 [2024-07-12 17:35:04.473643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.859 [2024-07-12 17:35:04.473648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.859 [2024-07-12 17:35:04.476325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.859 [2024-07-12 17:35:04.485315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.859 [2024-07-12 17:35:04.485696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.859 [2024-07-12 17:35:04.485745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.859 [2024-07-12 17:35:04.485767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.859 [2024-07-12 17:35:04.486344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.859 [2024-07-12 17:35:04.486927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.859 [2024-07-12 17:35:04.486935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.859 [2024-07-12 17:35:04.486941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.859 [2024-07-12 17:35:04.489624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.859 [2024-07-12 17:35:04.498249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.859 [2024-07-12 17:35:04.498645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.859 [2024-07-12 17:35:04.498686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.859 [2024-07-12 17:35:04.498708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.859 [2024-07-12 17:35:04.499283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.859 [2024-07-12 17:35:04.499544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.859 [2024-07-12 17:35:04.499556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.859 [2024-07-12 17:35:04.499565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.859 [2024-07-12 17:35:04.503623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.859 [2024-07-12 17:35:04.511563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.859 [2024-07-12 17:35:04.511879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.859 [2024-07-12 17:35:04.511895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.859 [2024-07-12 17:35:04.511901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.859 [2024-07-12 17:35:04.512071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.859 [2024-07-12 17:35:04.512242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.859 [2024-07-12 17:35:04.512250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.859 [2024-07-12 17:35:04.512256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.859 [2024-07-12 17:35:04.514977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.859 [2024-07-12 17:35:04.524354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.859 [2024-07-12 17:35:04.524742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.859 [2024-07-12 17:35:04.524758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.859 [2024-07-12 17:35:04.524764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.859 [2024-07-12 17:35:04.524935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.859 [2024-07-12 17:35:04.525109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.859 [2024-07-12 17:35:04.525118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.859 [2024-07-12 17:35:04.525125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.859 [2024-07-12 17:35:04.527807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.859 [2024-07-12 17:35:04.537243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.859 [2024-07-12 17:35:04.537626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.859 [2024-07-12 17:35:04.537641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.859 [2024-07-12 17:35:04.537648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.859 [2024-07-12 17:35:04.537818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.859 [2024-07-12 17:35:04.537989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.859 [2024-07-12 17:35:04.537997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.859 [2024-07-12 17:35:04.538003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.859 [2024-07-12 17:35:04.540684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 24610 Killed "${NVMF_APP[@]}" "$@"
00:26:45.859 17:35:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:45.859 17:35:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:45.859 17:35:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:45.859 17:35:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:45.859 17:35:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:45.859 17:35:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=25990
00:26:45.859 17:35:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 25990
00:26:45.859 [2024-07-12 17:35:04.550355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.859 17:35:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:45.859 17:35:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 25990 ']'
00:26:45.859 [2024-07-12 17:35:04.550805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.859 [2024-07-12 17:35:04.550821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.859 [2024-07-12 17:35:04.550828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.859 17:35:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:45.859 [2024-07-12 17:35:04.551004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.859 [2024-07-12 17:35:04.551182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.859 [2024-07-12 17:35:04.551191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.859 [2024-07-12 17:35:04.551197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.860 17:35:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:45.860 17:35:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:45.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:45.860 17:35:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:45.860 17:35:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:45.860 [2024-07-12 17:35:04.554025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.860 [2024-07-12 17:35:04.563546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.860 [2024-07-12 17:35:04.563848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.860 [2024-07-12 17:35:04.563863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.860 [2024-07-12 17:35:04.563870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.860 [2024-07-12 17:35:04.564045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.860 [2024-07-12 17:35:04.564221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.860 [2024-07-12 17:35:04.564228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.860 [2024-07-12 17:35:04.564234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.860 [2024-07-12 17:35:04.567064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.860 [2024-07-12 17:35:04.576590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.860 [2024-07-12 17:35:04.576952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.860 [2024-07-12 17:35:04.576968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.860 [2024-07-12 17:35:04.576975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.860 [2024-07-12 17:35:04.577151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.860 [2024-07-12 17:35:04.577328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.860 [2024-07-12 17:35:04.577336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.860 [2024-07-12 17:35:04.577342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.860 [2024-07-12 17:35:04.580170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.860 [2024-07-12 17:35:04.589677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.860 [2024-07-12 17:35:04.590002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.860 [2024-07-12 17:35:04.590018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.860 [2024-07-12 17:35:04.590025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.860 [2024-07-12 17:35:04.590202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.860 [2024-07-12 17:35:04.590385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.860 [2024-07-12 17:35:04.590393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.860 [2024-07-12 17:35:04.590400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.860 [2024-07-12 17:35:04.593190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.860 [2024-07-12 17:35:04.600748] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:26:45.860 [2024-07-12 17:35:04.600786] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:45.860 [2024-07-12 17:35:04.602698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.860 [2024-07-12 17:35:04.603112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.860 [2024-07-12 17:35:04.603128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.860 [2024-07-12 17:35:04.603135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.860 [2024-07-12 17:35:04.603312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.860 [2024-07-12 17:35:04.603494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.860 [2024-07-12 17:35:04.603503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.860 [2024-07-12 17:35:04.603509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.860 [2024-07-12 17:35:04.606333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.860 [2024-07-12 17:35:04.615647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.860 [2024-07-12 17:35:04.616090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.860 [2024-07-12 17:35:04.616108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.860 [2024-07-12 17:35:04.616115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.860 [2024-07-12 17:35:04.616291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.860 [2024-07-12 17:35:04.616473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.860 [2024-07-12 17:35:04.616482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.860 [2024-07-12 17:35:04.616488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.860 [2024-07-12 17:35:04.619314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:45.860 EAL: No free 2048 kB hugepages reported on node 1
00:26:45.860 [2024-07-12 17:35:04.628603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:45.860 [2024-07-12 17:35:04.629020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.860 [2024-07-12 17:35:04.629036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:45.860 [2024-07-12 17:35:04.629043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:45.860 [2024-07-12 17:35:04.629220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:45.860 [2024-07-12 17:35:04.629402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:45.860 [2024-07-12 17:35:04.629410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:45.860 [2024-07-12 17:35:04.629417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:45.860 [2024-07-12 17:35:04.632245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:46.120 [2024-07-12 17:35:04.641755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:46.120 [2024-07-12 17:35:04.642179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.120 [2024-07-12 17:35:04.642195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:46.120 [2024-07-12 17:35:04.642202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:46.120 [2024-07-12 17:35:04.642384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:46.120 [2024-07-12 17:35:04.642560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:46.120 [2024-07-12 17:35:04.642568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:46.120 [2024-07-12 17:35:04.642575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:46.120 [2024-07-12 17:35:04.645401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:46.120 [2024-07-12 17:35:04.654911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:46.120 [2024-07-12 17:35:04.655350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.120 [2024-07-12 17:35:04.655366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:46.120 [2024-07-12 17:35:04.655373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:46.120 [2024-07-12 17:35:04.655554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:46.120 [2024-07-12 17:35:04.655731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:46.120 [2024-07-12 17:35:04.655739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:46.120 [2024-07-12 17:35:04.655745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:46.120 [2024-07-12 17:35:04.658071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:46.120 [2024-07-12 17:35:04.658586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:46.120 [2024-07-12 17:35:04.667886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:46.120 [2024-07-12 17:35:04.668361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.120 [2024-07-12 17:35:04.668384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:46.121 [2024-07-12 17:35:04.668392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:46.121 [2024-07-12 17:35:04.668569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:46.121 [2024-07-12 17:35:04.668746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:46.121 [2024-07-12 17:35:04.668754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:46.121 [2024-07-12 17:35:04.668761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:46.121 [2024-07-12 17:35:04.671560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:46.121 [2024-07-12 17:35:04.680870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:46.121 [2024-07-12 17:35:04.681243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.121 [2024-07-12 17:35:04.681259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:46.121 [2024-07-12 17:35:04.681269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:46.121 [2024-07-12 17:35:04.681453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:46.121 [2024-07-12 17:35:04.681631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:46.121 [2024-07-12 17:35:04.681638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:46.121 [2024-07-12 17:35:04.681645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:46.121 [2024-07-12 17:35:04.684449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:46.121 [2024-07-12 17:35:04.693911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:46.121 [2024-07-12 17:35:04.694401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.121 [2024-07-12 17:35:04.694418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420
00:26:46.121 [2024-07-12 17:35:04.694425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set
00:26:46.121 [2024-07-12 17:35:04.694601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor
00:26:46.121 [2024-07-12 17:35:04.694777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:46.121 [2024-07-12 17:35:04.694785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:46.121 [2024-07-12 17:35:04.694791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:46.121 [2024-07-12 17:35:04.697591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:46.121 [2024-07-12 17:35:04.706893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.121 [2024-07-12 17:35:04.707361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.121 [2024-07-12 17:35:04.707387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.121 [2024-07-12 17:35:04.707396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.121 [2024-07-12 17:35:04.707574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.121 [2024-07-12 17:35:04.707751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.121 [2024-07-12 17:35:04.707759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.121 [2024-07-12 17:35:04.707766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.121 [2024-07-12 17:35:04.710578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.121 [2024-07-12 17:35:04.719867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.121 [2024-07-12 17:35:04.720332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.121 [2024-07-12 17:35:04.720348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.121 [2024-07-12 17:35:04.720355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.121 [2024-07-12 17:35:04.720537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.121 [2024-07-12 17:35:04.720714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.121 [2024-07-12 17:35:04.720730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.121 [2024-07-12 17:35:04.720737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.121 [2024-07-12 17:35:04.723529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.121 [2024-07-12 17:35:04.733045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.121 [2024-07-12 17:35:04.733465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.121 [2024-07-12 17:35:04.733482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.121 [2024-07-12 17:35:04.733489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.121 [2024-07-12 17:35:04.733665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.121 [2024-07-12 17:35:04.733842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.121 [2024-07-12 17:35:04.733850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.121 [2024-07-12 17:35:04.733856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.121 [2024-07-12 17:35:04.736686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:46.121 [2024-07-12 17:35:04.739839] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.121 [2024-07-12 17:35:04.739864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.121 [2024-07-12 17:35:04.739871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:46.121 [2024-07-12 17:35:04.739877] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:46.121 [2024-07-12 17:35:04.739883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:46.121 [2024-07-12 17:35:04.739920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:46.121 [2024-07-12 17:35:04.740025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:46.121 [2024-07-12 17:35:04.740026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.121 [2024-07-12 17:35:04.746205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.121 [2024-07-12 17:35:04.746647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.121 [2024-07-12 17:35:04.746666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.121 [2024-07-12 17:35:04.746673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.121 [2024-07-12 17:35:04.746851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.121 [2024-07-12 17:35:04.747028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.121 [2024-07-12 17:35:04.747036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.121 [2024-07-12 17:35:04.747043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.121 [2024-07-12 17:35:04.749872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.121 [2024-07-12 17:35:04.759382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.121 [2024-07-12 17:35:04.759839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.121 [2024-07-12 17:35:04.759857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.121 [2024-07-12 17:35:04.759870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.121 [2024-07-12 17:35:04.760048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.121 [2024-07-12 17:35:04.760224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.121 [2024-07-12 17:35:04.760232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.121 [2024-07-12 17:35:04.760239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.121 [2024-07-12 17:35:04.763062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.121 [2024-07-12 17:35:04.772564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.121 [2024-07-12 17:35:04.773006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.121 [2024-07-12 17:35:04.773024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.121 [2024-07-12 17:35:04.773032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.121 [2024-07-12 17:35:04.773209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.121 [2024-07-12 17:35:04.773395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.121 [2024-07-12 17:35:04.773403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.121 [2024-07-12 17:35:04.773411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.121 [2024-07-12 17:35:04.776234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.121 [2024-07-12 17:35:04.785760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.121 [2024-07-12 17:35:04.786224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.121 [2024-07-12 17:35:04.786243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.121 [2024-07-12 17:35:04.786251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.121 [2024-07-12 17:35:04.786434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.121 [2024-07-12 17:35:04.786611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.121 [2024-07-12 17:35:04.786619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.121 [2024-07-12 17:35:04.786626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.121 [2024-07-12 17:35:04.789448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.121 [2024-07-12 17:35:04.798939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.121 [2024-07-12 17:35:04.799358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.121 [2024-07-12 17:35:04.799376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.122 [2024-07-12 17:35:04.799387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.122 [2024-07-12 17:35:04.799564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.122 [2024-07-12 17:35:04.799741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.122 [2024-07-12 17:35:04.799749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.122 [2024-07-12 17:35:04.799761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.122 [2024-07-12 17:35:04.802581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.122 [2024-07-12 17:35:04.812069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.122 [2024-07-12 17:35:04.812490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.122 [2024-07-12 17:35:04.812506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.122 [2024-07-12 17:35:04.812514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.122 [2024-07-12 17:35:04.812690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.122 [2024-07-12 17:35:04.812868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.122 [2024-07-12 17:35:04.812876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.122 [2024-07-12 17:35:04.812882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.122 [2024-07-12 17:35:04.815704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.122 [2024-07-12 17:35:04.825192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.122 [2024-07-12 17:35:04.825620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.122 [2024-07-12 17:35:04.825636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.122 [2024-07-12 17:35:04.825643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.122 [2024-07-12 17:35:04.825820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.122 [2024-07-12 17:35:04.825996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.122 [2024-07-12 17:35:04.826004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.122 [2024-07-12 17:35:04.826011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.122 [2024-07-12 17:35:04.828829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.122 [2024-07-12 17:35:04.838318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.122 [2024-07-12 17:35:04.838725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.122 [2024-07-12 17:35:04.838741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.122 [2024-07-12 17:35:04.838748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.122 [2024-07-12 17:35:04.838925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.122 [2024-07-12 17:35:04.839101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.122 [2024-07-12 17:35:04.839109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.122 [2024-07-12 17:35:04.839116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.122 [2024-07-12 17:35:04.841936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.122 [2024-07-12 17:35:04.851430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.122 [2024-07-12 17:35:04.851850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.122 [2024-07-12 17:35:04.851865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.122 [2024-07-12 17:35:04.851871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.122 [2024-07-12 17:35:04.852047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.122 [2024-07-12 17:35:04.852225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.122 [2024-07-12 17:35:04.852233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.122 [2024-07-12 17:35:04.852239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.122 [2024-07-12 17:35:04.855065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.122 [2024-07-12 17:35:04.864569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.122 [2024-07-12 17:35:04.865025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.122 [2024-07-12 17:35:04.865041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.122 [2024-07-12 17:35:04.865048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.122 [2024-07-12 17:35:04.865224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.122 [2024-07-12 17:35:04.865406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.122 [2024-07-12 17:35:04.865414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.122 [2024-07-12 17:35:04.865421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.122 [2024-07-12 17:35:04.868240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.122 [2024-07-12 17:35:04.877736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.122 [2024-07-12 17:35:04.878155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.122 [2024-07-12 17:35:04.878171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.122 [2024-07-12 17:35:04.878177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.122 [2024-07-12 17:35:04.878353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.122 [2024-07-12 17:35:04.878534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.122 [2024-07-12 17:35:04.878543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.122 [2024-07-12 17:35:04.878549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.122 [2024-07-12 17:35:04.881368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.122 [2024-07-12 17:35:04.890871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.122 [2024-07-12 17:35:04.891300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.122 [2024-07-12 17:35:04.891315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.122 [2024-07-12 17:35:04.891322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.122 [2024-07-12 17:35:04.891507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.122 [2024-07-12 17:35:04.891683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.122 [2024-07-12 17:35:04.891691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.122 [2024-07-12 17:35:04.891697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.122 [2024-07-12 17:35:04.894521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.383 [2024-07-12 17:35:04.904011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.383 [2024-07-12 17:35:04.904439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.383 [2024-07-12 17:35:04.904455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.383 [2024-07-12 17:35:04.904462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.383 [2024-07-12 17:35:04.904638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.383 [2024-07-12 17:35:04.904812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.383 [2024-07-12 17:35:04.904821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.383 [2024-07-12 17:35:04.904826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.383 [2024-07-12 17:35:04.907651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.383 [2024-07-12 17:35:04.917143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.383 [2024-07-12 17:35:04.917506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.383 [2024-07-12 17:35:04.917522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.383 [2024-07-12 17:35:04.917529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.383 [2024-07-12 17:35:04.917705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.383 [2024-07-12 17:35:04.917881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.383 [2024-07-12 17:35:04.917889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.383 [2024-07-12 17:35:04.917895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.383 [2024-07-12 17:35:04.920718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.383 [2024-07-12 17:35:04.930205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.383 [2024-07-12 17:35:04.930610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.383 [2024-07-12 17:35:04.930625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.383 [2024-07-12 17:35:04.930632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.383 [2024-07-12 17:35:04.930808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.383 [2024-07-12 17:35:04.930984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.383 [2024-07-12 17:35:04.930991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.383 [2024-07-12 17:35:04.931001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.383 [2024-07-12 17:35:04.933823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.383 [2024-07-12 17:35:04.943353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.383 [2024-07-12 17:35:04.943758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.383 [2024-07-12 17:35:04.943775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.383 [2024-07-12 17:35:04.943782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.383 [2024-07-12 17:35:04.943959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.383 [2024-07-12 17:35:04.944135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.383 [2024-07-12 17:35:04.944143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.383 [2024-07-12 17:35:04.944149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.383 [2024-07-12 17:35:04.946972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.383 [2024-07-12 17:35:04.956459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.383 [2024-07-12 17:35:04.956884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.383 [2024-07-12 17:35:04.956899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.383 [2024-07-12 17:35:04.956906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.383 [2024-07-12 17:35:04.957083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.383 [2024-07-12 17:35:04.957260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.383 [2024-07-12 17:35:04.957268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.383 [2024-07-12 17:35:04.957275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.383 [2024-07-12 17:35:04.960095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.383 [2024-07-12 17:35:04.969581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.383 [2024-07-12 17:35:04.969997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.383 [2024-07-12 17:35:04.970013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.383 [2024-07-12 17:35:04.970020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.383 [2024-07-12 17:35:04.970197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.383 [2024-07-12 17:35:04.970372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.383 [2024-07-12 17:35:04.970386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.383 [2024-07-12 17:35:04.970392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.383 [2024-07-12 17:35:04.973208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.383 [2024-07-12 17:35:04.982706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.383 [2024-07-12 17:35:04.983131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.383 [2024-07-12 17:35:04.983150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.383 [2024-07-12 17:35:04.983156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.383 [2024-07-12 17:35:04.983332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.383 [2024-07-12 17:35:04.983512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.383 [2024-07-12 17:35:04.983520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.383 [2024-07-12 17:35:04.983527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.383 [2024-07-12 17:35:04.986354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.383 [2024-07-12 17:35:04.995848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.383 [2024-07-12 17:35:04.996267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.383 [2024-07-12 17:35:04.996283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.383 [2024-07-12 17:35:04.996290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.383 [2024-07-12 17:35:04.996470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.383 [2024-07-12 17:35:04.996647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.383 [2024-07-12 17:35:04.996654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.383 [2024-07-12 17:35:04.996660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.383 [2024-07-12 17:35:04.999481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.383 [2024-07-12 17:35:05.008970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.383 [2024-07-12 17:35:05.009391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.383 [2024-07-12 17:35:05.009407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.383 [2024-07-12 17:35:05.009414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.383 [2024-07-12 17:35:05.009591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.384 [2024-07-12 17:35:05.009767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.384 [2024-07-12 17:35:05.009775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.384 [2024-07-12 17:35:05.009781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.384 [2024-07-12 17:35:05.012605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.384 [2024-07-12 17:35:05.022088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.384 [2024-07-12 17:35:05.022519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.384 [2024-07-12 17:35:05.022534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.384 [2024-07-12 17:35:05.022541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.384 [2024-07-12 17:35:05.022718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.384 [2024-07-12 17:35:05.022898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.384 [2024-07-12 17:35:05.022906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.384 [2024-07-12 17:35:05.022912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.384 [2024-07-12 17:35:05.025729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.384 [2024-07-12 17:35:05.035212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.384 [2024-07-12 17:35:05.035641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.384 [2024-07-12 17:35:05.035656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.384 [2024-07-12 17:35:05.035663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.384 [2024-07-12 17:35:05.035839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.384 [2024-07-12 17:35:05.036016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.384 [2024-07-12 17:35:05.036024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.384 [2024-07-12 17:35:05.036031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.384 [2024-07-12 17:35:05.038853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.384 [2024-07-12 17:35:05.048345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.384 [2024-07-12 17:35:05.048816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.384 [2024-07-12 17:35:05.048832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.384 [2024-07-12 17:35:05.048838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.384 [2024-07-12 17:35:05.049014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.384 [2024-07-12 17:35:05.049190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.384 [2024-07-12 17:35:05.049198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.384 [2024-07-12 17:35:05.049204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.384 [2024-07-12 17:35:05.052025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.384 [2024-07-12 17:35:05.061530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.384 [2024-07-12 17:35:05.061901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.384 [2024-07-12 17:35:05.061916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.384 [2024-07-12 17:35:05.061923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.384 [2024-07-12 17:35:05.062099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.384 [2024-07-12 17:35:05.062276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.384 [2024-07-12 17:35:05.062284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.384 [2024-07-12 17:35:05.062290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.384 [2024-07-12 17:35:05.065110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.384 [2024-07-12 17:35:05.074620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.384 [2024-07-12 17:35:05.074989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.384 [2024-07-12 17:35:05.075005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.384 [2024-07-12 17:35:05.075011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.384 [2024-07-12 17:35:05.075187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.384 [2024-07-12 17:35:05.075364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.384 [2024-07-12 17:35:05.075372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.384 [2024-07-12 17:35:05.075383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.384 [2024-07-12 17:35:05.078202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.384 [2024-07-12 17:35:05.087707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.384 [2024-07-12 17:35:05.088056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.384 [2024-07-12 17:35:05.088072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.384 [2024-07-12 17:35:05.088079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.384 [2024-07-12 17:35:05.088255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.384 [2024-07-12 17:35:05.088437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.384 [2024-07-12 17:35:05.088446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.384 [2024-07-12 17:35:05.088452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.384 [2024-07-12 17:35:05.091275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.384 [2024-07-12 17:35:05.100765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.384 [2024-07-12 17:35:05.101184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.384 [2024-07-12 17:35:05.101200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.384 [2024-07-12 17:35:05.101206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.384 [2024-07-12 17:35:05.101387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.384 [2024-07-12 17:35:05.101564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.384 [2024-07-12 17:35:05.101572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.384 [2024-07-12 17:35:05.101578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.384 [2024-07-12 17:35:05.104400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.384 [2024-07-12 17:35:05.113885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.384 [2024-07-12 17:35:05.114307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.384 [2024-07-12 17:35:05.114323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.384 [2024-07-12 17:35:05.114334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.384 [2024-07-12 17:35:05.114516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.384 [2024-07-12 17:35:05.114692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.384 [2024-07-12 17:35:05.114700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.384 [2024-07-12 17:35:05.114706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.384 [2024-07-12 17:35:05.117529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.384 [2024-07-12 17:35:05.127032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.384 [2024-07-12 17:35:05.127449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.384 [2024-07-12 17:35:05.127466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.384 [2024-07-12 17:35:05.127473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.384 [2024-07-12 17:35:05.127649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.384 [2024-07-12 17:35:05.127826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.384 [2024-07-12 17:35:05.127833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.384 [2024-07-12 17:35:05.127839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.384 [2024-07-12 17:35:05.130663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.384 [2024-07-12 17:35:05.140164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.384 [2024-07-12 17:35:05.140606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.384 [2024-07-12 17:35:05.140622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.384 [2024-07-12 17:35:05.140629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.384 [2024-07-12 17:35:05.140805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.384 [2024-07-12 17:35:05.140981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.384 [2024-07-12 17:35:05.140990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.384 [2024-07-12 17:35:05.140996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.384 [2024-07-12 17:35:05.143820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.384 [2024-07-12 17:35:05.153323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.384 [2024-07-12 17:35:05.153752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.384 [2024-07-12 17:35:05.153767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.385 [2024-07-12 17:35:05.153774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.385 [2024-07-12 17:35:05.153950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.385 [2024-07-12 17:35:05.154127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.385 [2024-07-12 17:35:05.154140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.385 [2024-07-12 17:35:05.154148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.385 [2024-07-12 17:35:05.156973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.645 [2024-07-12 17:35:05.166474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.645 [2024-07-12 17:35:05.166879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.645 [2024-07-12 17:35:05.166893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.645 [2024-07-12 17:35:05.166900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.645 [2024-07-12 17:35:05.167076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.645 [2024-07-12 17:35:05.167252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.645 [2024-07-12 17:35:05.167260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.645 [2024-07-12 17:35:05.167267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.645 [2024-07-12 17:35:05.170095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.645 [2024-07-12 17:35:05.179619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.645 [2024-07-12 17:35:05.179955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.645 [2024-07-12 17:35:05.179971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.645 [2024-07-12 17:35:05.179978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.645 [2024-07-12 17:35:05.180154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.645 [2024-07-12 17:35:05.180331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.645 [2024-07-12 17:35:05.180339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.645 [2024-07-12 17:35:05.180345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.645 [2024-07-12 17:35:05.183177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.645 [2024-07-12 17:35:05.192675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.645 [2024-07-12 17:35:05.192992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.645 [2024-07-12 17:35:05.193008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.645 [2024-07-12 17:35:05.193015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.645 [2024-07-12 17:35:05.193191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.645 [2024-07-12 17:35:05.193368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.645 [2024-07-12 17:35:05.193376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.645 [2024-07-12 17:35:05.193391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.645 [2024-07-12 17:35:05.196212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.645 [2024-07-12 17:35:05.205705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.645 [2024-07-12 17:35:05.206152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.646 [2024-07-12 17:35:05.206166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.646 [2024-07-12 17:35:05.206173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.646 [2024-07-12 17:35:05.206349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.646 [2024-07-12 17:35:05.206531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.646 [2024-07-12 17:35:05.206540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.646 [2024-07-12 17:35:05.206547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.646 [2024-07-12 17:35:05.209361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.646 [2024-07-12 17:35:05.218857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.646 [2024-07-12 17:35:05.219286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.646 [2024-07-12 17:35:05.219302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.646 [2024-07-12 17:35:05.219308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.646 [2024-07-12 17:35:05.219489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.646 [2024-07-12 17:35:05.219665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.646 [2024-07-12 17:35:05.219672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.646 [2024-07-12 17:35:05.219679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.646 [2024-07-12 17:35:05.222501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.646 [2024-07-12 17:35:05.231990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.646 [2024-07-12 17:35:05.232353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.646 [2024-07-12 17:35:05.232369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.646 [2024-07-12 17:35:05.232376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.646 [2024-07-12 17:35:05.232556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.646 [2024-07-12 17:35:05.232732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.646 [2024-07-12 17:35:05.232741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.646 [2024-07-12 17:35:05.232747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.646 [2024-07-12 17:35:05.235565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.646 [2024-07-12 17:35:05.245055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.646 [2024-07-12 17:35:05.245473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.646 [2024-07-12 17:35:05.245489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.646 [2024-07-12 17:35:05.245495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.646 [2024-07-12 17:35:05.245674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.646 [2024-07-12 17:35:05.245852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.646 [2024-07-12 17:35:05.245860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.646 [2024-07-12 17:35:05.245866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.646 [2024-07-12 17:35:05.248688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.646 [2024-07-12 17:35:05.258188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.646 [2024-07-12 17:35:05.258603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.646 [2024-07-12 17:35:05.258619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.646 [2024-07-12 17:35:05.258626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.646 [2024-07-12 17:35:05.258803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.646 [2024-07-12 17:35:05.258980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.646 [2024-07-12 17:35:05.258988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.646 [2024-07-12 17:35:05.258994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.646 [2024-07-12 17:35:05.261815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.646 [2024-07-12 17:35:05.271312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.646 [2024-07-12 17:35:05.271662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.646 [2024-07-12 17:35:05.271678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.646 [2024-07-12 17:35:05.271685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.646 [2024-07-12 17:35:05.271860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.646 [2024-07-12 17:35:05.272037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.646 [2024-07-12 17:35:05.272045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.646 [2024-07-12 17:35:05.272051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.646 [2024-07-12 17:35:05.274878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.646 [2024-07-12 17:35:05.284380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.646 [2024-07-12 17:35:05.284733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.646 [2024-07-12 17:35:05.284749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.646 [2024-07-12 17:35:05.284755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.646 [2024-07-12 17:35:05.284931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.646 [2024-07-12 17:35:05.285107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.646 [2024-07-12 17:35:05.285115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.646 [2024-07-12 17:35:05.285125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.646 [2024-07-12 17:35:05.287952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.646 [2024-07-12 17:35:05.297448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.646 [2024-07-12 17:35:05.297882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.646 [2024-07-12 17:35:05.297898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.646 [2024-07-12 17:35:05.297905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.646 [2024-07-12 17:35:05.298081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.646 [2024-07-12 17:35:05.298258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.646 [2024-07-12 17:35:05.298266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.646 [2024-07-12 17:35:05.298272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.646 [2024-07-12 17:35:05.301093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.646 [2024-07-12 17:35:05.310591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.646 [2024-07-12 17:35:05.311022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.646 [2024-07-12 17:35:05.311037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.646 [2024-07-12 17:35:05.311044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.646 [2024-07-12 17:35:05.311220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.646 [2024-07-12 17:35:05.311401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.646 [2024-07-12 17:35:05.311409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.646 [2024-07-12 17:35:05.311415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.646 [2024-07-12 17:35:05.314235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.646 [2024-07-12 17:35:05.323735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.646 [2024-07-12 17:35:05.324139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.646 [2024-07-12 17:35:05.324154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.646 [2024-07-12 17:35:05.324161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.646 [2024-07-12 17:35:05.324337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.646 [2024-07-12 17:35:05.324519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.646 [2024-07-12 17:35:05.324528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.646 [2024-07-12 17:35:05.324534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.646 [2024-07-12 17:35:05.327357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.646 [2024-07-12 17:35:05.337035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.646 [2024-07-12 17:35:05.337472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.646 [2024-07-12 17:35:05.337492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.646 [2024-07-12 17:35:05.337499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.646 [2024-07-12 17:35:05.337676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.646 [2024-07-12 17:35:05.337853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.646 [2024-07-12 17:35:05.337861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.646 [2024-07-12 17:35:05.337867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.646 [2024-07-12 17:35:05.340694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.646 [2024-07-12 17:35:05.350186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.646 [2024-07-12 17:35:05.350528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.647 [2024-07-12 17:35:05.350545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.647 [2024-07-12 17:35:05.350552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.647 [2024-07-12 17:35:05.350728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.647 [2024-07-12 17:35:05.350907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.647 [2024-07-12 17:35:05.350916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.647 [2024-07-12 17:35:05.350923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.647 [2024-07-12 17:35:05.353743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.647 [2024-07-12 17:35:05.363244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.647 [2024-07-12 17:35:05.363670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.647 [2024-07-12 17:35:05.363686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.647 [2024-07-12 17:35:05.363693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.647 [2024-07-12 17:35:05.363868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.647 [2024-07-12 17:35:05.364045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.647 [2024-07-12 17:35:05.364054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.647 [2024-07-12 17:35:05.364060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.647 [2024-07-12 17:35:05.366883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.647 [2024-07-12 17:35:05.376375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.647 [2024-07-12 17:35:05.376798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.647 [2024-07-12 17:35:05.376814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.647 [2024-07-12 17:35:05.376821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.647 [2024-07-12 17:35:05.376998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.647 [2024-07-12 17:35:05.377180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.647 [2024-07-12 17:35:05.377189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.647 [2024-07-12 17:35:05.377194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.647 [2024-07-12 17:35:05.380017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.647 [2024-07-12 17:35:05.389511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.647 [2024-07-12 17:35:05.389868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.647 [2024-07-12 17:35:05.389884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.647 [2024-07-12 17:35:05.389891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.647 [2024-07-12 17:35:05.390067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.647 [2024-07-12 17:35:05.390243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.647 [2024-07-12 17:35:05.390251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.647 [2024-07-12 17:35:05.390257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.647 [2024-07-12 17:35:05.393079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.647 [2024-07-12 17:35:05.402579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.647 [2024-07-12 17:35:05.403025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.647 [2024-07-12 17:35:05.403041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.647 [2024-07-12 17:35:05.403048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.647 [2024-07-12 17:35:05.403223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.647 [2024-07-12 17:35:05.403406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.647 [2024-07-12 17:35:05.403414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.647 [2024-07-12 17:35:05.403420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.647 [2024-07-12 17:35:05.406241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.647 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:46.647 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:46.647 17:35:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:46.647 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:46.647 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.647 [2024-07-12 17:35:05.415729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.647 [2024-07-12 17:35:05.416198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.647 [2024-07-12 17:35:05.416214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.647 [2024-07-12 17:35:05.416221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.647 [2024-07-12 17:35:05.416402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.647 [2024-07-12 17:35:05.416581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.647 [2024-07-12 17:35:05.416593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.647 [2024-07-12 17:35:05.416599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.647 [2024-07-12 17:35:05.419424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.907 [2024-07-12 17:35:05.428769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.907 [2024-07-12 17:35:05.429140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-07-12 17:35:05.429156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.907 [2024-07-12 17:35:05.429163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.907 [2024-07-12 17:35:05.429339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.907 [2024-07-12 17:35:05.429519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.907 [2024-07-12 17:35:05.429528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.907 [2024-07-12 17:35:05.429534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.907 [2024-07-12 17:35:05.432360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.907 [2024-07-12 17:35:05.441864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.907 [2024-07-12 17:35:05.442166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-07-12 17:35:05.442182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.907 [2024-07-12 17:35:05.442190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.907 [2024-07-12 17:35:05.442366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.907 [2024-07-12 17:35:05.442550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.907 [2024-07-12 17:35:05.442559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.907 [2024-07-12 17:35:05.442565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.907 [2024-07-12 17:35:05.445386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.907 17:35:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.907 17:35:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:46.907 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.907 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.907 [2024-07-12 17:35:05.452522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.907 [2024-07-12 17:35:05.455050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.907 [2024-07-12 17:35:05.455470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-07-12 17:35:05.455485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.907 [2024-07-12 17:35:05.455493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.907 [2024-07-12 17:35:05.455669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.907 [2024-07-12 17:35:05.455849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.907 [2024-07-12 17:35:05.455857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.907 [2024-07-12 17:35:05.455863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:46.907 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.907 17:35:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:46.907 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.907 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.907 [2024-07-12 17:35:05.458689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:46.907 [2024-07-12 17:35:05.468189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.907 [2024-07-12 17:35:05.468613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-07-12 17:35:05.468629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.907 [2024-07-12 17:35:05.468636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.908 [2024-07-12 17:35:05.468812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.908 [2024-07-12 17:35:05.468989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.908 [2024-07-12 17:35:05.468997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.908 [2024-07-12 17:35:05.469003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.908 [2024-07-12 17:35:05.471828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.908 [2024-07-12 17:35:05.481340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.908 [2024-07-12 17:35:05.481806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-07-12 17:35:05.481823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.908 [2024-07-12 17:35:05.481830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.908 [2024-07-12 17:35:05.482007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.908 [2024-07-12 17:35:05.482184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.908 [2024-07-12 17:35:05.482193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.908 [2024-07-12 17:35:05.482199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.908 [2024-07-12 17:35:05.485036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.908 Malloc0 00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.908 [2024-07-12 17:35:05.494529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.908 [2024-07-12 17:35:05.494978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-07-12 17:35:05.494994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.908 [2024-07-12 17:35:05.495005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.908 [2024-07-12 17:35:05.495181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.908 [2024-07-12 17:35:05.495357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.908 [2024-07-12 17:35:05.495365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.908 [2024-07-12 17:35:05.495371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.908 [2024-07-12 17:35:05.498192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.908 [2024-07-12 17:35:05.507686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.908 [2024-07-12 17:35:05.508134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-07-12 17:35:05.508149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ca980 with addr=10.0.0.2, port=4420 00:26:46.908 [2024-07-12 17:35:05.508156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca980 is same with the state(5) to be set 00:26:46.908 [2024-07-12 17:35:05.508332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ca980 (9): Bad file descriptor 00:26:46.908 [2024-07-12 17:35:05.508513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:46.908 [2024-07-12 17:35:05.508522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:46.908 [2024-07-12 17:35:05.508528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:46.908 [2024-07-12 17:35:05.509405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.908 [2024-07-12 17:35:05.511351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.908 17:35:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 24940 00:26:46.908 [2024-07-12 17:35:05.520850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:46.908 [2024-07-12 17:35:05.591586] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:56.889 00:26:56.889 Latency(us) 00:26:56.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.889 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:56.889 Verification LBA range: start 0x0 length 0x4000 00:26:56.889 Nvme1n1 : 15.00 8002.54 31.26 12810.69 0.00 6130.26 598.37 21085.50 00:26:56.889 =================================================================================================================== 00:26:56.889 Total : 8002.54 31.26 12810.69 0.00 6130.26 598.37 21085.50 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf 
-- nvmf/common.sh@488 -- # nvmfcleanup 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:56.889 rmmod nvme_tcp 00:26:56.889 rmmod nvme_fabrics 00:26:56.889 rmmod nvme_keyring 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 25990 ']' 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 25990 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 25990 ']' 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 25990 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 25990 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 25990' 00:26:56.889 killing process with pid 25990 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 25990 00:26:56.889 
17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 25990 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.889 17:35:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.866 17:35:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:57.866 00:26:57.866 real 0m25.834s 00:26:57.866 user 1m2.550s 00:26:57.866 sys 0m6.053s 00:26:57.866 17:35:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:57.866 17:35:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.866 ************************************ 00:26:57.866 END TEST nvmf_bdevperf 00:26:57.866 ************************************ 00:26:57.866 17:35:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:57.866 17:35:16 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:57.866 17:35:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:57.866 17:35:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:57.866 17:35:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:57.866 ************************************ 00:26:57.866 START TEST 
nvmf_target_disconnect 00:26:57.866 ************************************ 00:26:57.866 17:35:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:58.125 * Looking for test storage... 00:26:58.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:58.125 
17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:58.125 17:35:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:03.402 
17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:03.402 17:35:21 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:03.402 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:03.402 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:03.403 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:03.403 Found net devices under 0000:86:00.0: cvl_0_0 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:03.403 Found net devices under 0000:86:00.1: cvl_0_1 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:03.403 
17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:03.403 17:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:03.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:27:03.403 00:27:03.403 --- 10.0.0.2 ping statistics --- 00:27:03.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.403 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:03.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:27:03.403 00:27:03.403 --- 10.0.0.1 ping statistics --- 00:27:03.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.403 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:03.403 ************************************ 00:27:03.403 START TEST nvmf_target_disconnect_tc1 00:27:03.403 ************************************ 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:27:03.403 
17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:03.403 17:35:22 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:03.403 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:03.663 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.663 [2024-07-12 17:35:22.231684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.663 [2024-07-12 17:35:22.231723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1897e60 with addr=10.0.0.2, port=4420 00:27:03.663 [2024-07-12 17:35:22.231759] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:03.663 [2024-07-12 17:35:22.231769] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:03.663 [2024-07-12 17:35:22.231775] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:03.663 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:03.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:03.663 Initializing NVMe Controllers 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:03.663 00:27:03.663 real 0m0.099s 00:27:03.663 user 0m0.042s 00:27:03.663 sys 0m0.057s 
00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:03.663 ************************************ 00:27:03.663 END TEST nvmf_target_disconnect_tc1 00:27:03.663 ************************************ 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:03.663 ************************************ 00:27:03.663 START TEST nvmf_target_disconnect_tc2 00:27:03.663 ************************************ 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:03.663 17:35:22 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=31024 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 31024 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 31024 ']' 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:03.663 17:35:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:03.663 [2024-07-12 17:35:22.347440] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:27:03.663 [2024-07-12 17:35:22.347477] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.663 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.663 [2024-07-12 17:35:22.419181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:03.922 [2024-07-12 17:35:22.499525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.922 [2024-07-12 17:35:22.499564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.922 [2024-07-12 17:35:22.499572] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.922 [2024-07-12 17:35:22.499578] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.922 [2024-07-12 17:35:22.499582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:03.922 [2024-07-12 17:35:22.499690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:03.922 [2024-07-12 17:35:22.499841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:03.922 [2024-07-12 17:35:22.499948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:03.922 [2024-07-12 17:35:22.499949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:04.499 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:04.499 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:04.499 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:04.499 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.499 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.499 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.499 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:04.499 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.499 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.499 Malloc0 00:27:04.499 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.499 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:27:04.499 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.500 [2024-07-12 17:35:23.229103] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.500 [2024-07-12 17:35:23.254090] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=31226 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:04.500 17:35:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:04.758 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.681 17:35:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 31024 00:27:06.681 17:35:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:06.681 Read completed with error (sct=0, sc=8) 00:27:06.681 starting I/O failed 00:27:06.681 Read completed with error (sct=0, 
sc=8) 00:27:06.681 starting I/O failed 00:27:06.681 Read completed with error (sct=0, sc=8) 00:27:06.681 starting I/O failed 00:27:06.681 Read completed with error (sct=0, sc=8) 00:27:06.681 starting I/O failed 00:27:06.681 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Write completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Write completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Write completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Write completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Write completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Read completed with error (sct=0, sc=8) 00:27:06.682 starting I/O failed 00:27:06.682 Write completed with error (sct=0, sc=8) 
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 [2024-07-12 17:35:25.281356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 [2024-07-12 17:35:25.281557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 [2024-07-12 17:35:25.281752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Read completed with error (sct=0, sc=8)
00:27:06.682 starting I/O failed
00:27:06.682 Write completed with error (sct=0, sc=8)
00:27:06.683 starting I/O failed
00:27:06.683 Read completed with error (sct=0, sc=8)
00:27:06.683 starting I/O failed
00:27:06.683 Read completed with error (sct=0, sc=8)
00:27:06.683 starting I/O failed
00:27:06.683 Write completed with error (sct=0, sc=8)
00:27:06.683 starting I/O failed
00:27:06.683 Read completed with error (sct=0, sc=8)
00:27:06.683 starting I/O failed
00:27:06.683 Write completed with error (sct=0, sc=8)
00:27:06.683 starting I/O failed
00:27:06.683 [2024-07-12 17:35:25.281949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:06.683 [2024-07-12 17:35:25.282163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.282180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.282369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.282385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.282488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.282497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.282743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.282753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.282856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.282865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.282975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.282986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.283094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.283104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.283192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.283201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.283285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.283294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.283457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.283468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.283709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.283738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.283849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.283879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.284138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.284167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.284311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.284341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.284468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.284498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.284700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.284730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.284936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.284946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.285104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.285114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.285206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.285215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.285303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.285313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.285408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.285417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.285502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.285511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.285678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.285688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.285832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.285842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.285954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.285964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.286191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.286200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.286364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.286374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.286529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.286542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.286635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.286645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.286778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.286791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.286935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.286945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.287204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.287233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.287432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.287463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.287591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.287621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.287820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.287831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.287983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.287992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.288091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.288100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.683 [2024-07-12 17:35:25.288191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.683 [2024-07-12 17:35:25.288201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.683 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.288367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.288381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.288471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.288480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.288589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.288598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.288695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.288705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.288864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.288874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.289026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.289036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.289182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.289212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.289364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.289401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.289532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.289562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.289701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.289711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.289796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.289805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.289969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.289979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.290083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.290092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.290310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.290320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.290478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.290488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.290697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.290707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.290790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.290799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.290875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.290885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.291061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.291071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.291226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.291254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.291403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.291434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.291566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.291596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.291720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.291749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.291926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.291936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.292115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.292145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.292352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.292390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.292639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.292669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.292865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.292875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.293043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.293053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.293128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.293137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.293281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.293291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.293432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.293444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.293605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.293615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.293828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.293858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.293998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.294027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.294301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.294331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.294468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.294478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.294653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.684 [2024-07-12 17:35:25.294663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.684 qpair failed and we were unable to recover it.
00:27:06.684 [2024-07-12 17:35:25.294738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.684 [2024-07-12 17:35:25.294747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.684 qpair failed and we were unable to recover it. 00:27:06.684 [2024-07-12 17:35:25.294825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.684 [2024-07-12 17:35:25.294834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.684 qpair failed and we were unable to recover it. 00:27:06.684 [2024-07-12 17:35:25.294981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.684 [2024-07-12 17:35:25.294991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.295244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.295273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.295410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.295442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 
00:27:06.685 [2024-07-12 17:35:25.295559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.295589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.295791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.295821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.296080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.296109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.296248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.296278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.296407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.296438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 
00:27:06.685 [2024-07-12 17:35:25.296576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.296605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.296815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.296845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.297041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.297051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.297194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.297204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.297282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.297291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 
00:27:06.685 [2024-07-12 17:35:25.297520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.297531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.297634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.297644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.297732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.297741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.297842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.297850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.298006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.298016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 
00:27:06.685 [2024-07-12 17:35:25.298193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.298204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.298291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.298300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.298375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.298392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.298551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.298560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.298700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.298710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 
00:27:06.685 [2024-07-12 17:35:25.298847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.298856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.299011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.299021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.299105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.299114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.299389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.299399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.299576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.299586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 
00:27:06.685 [2024-07-12 17:35:25.299679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.299688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.299784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.299793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.299902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.299912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.300096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.300108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.300263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.300273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 
00:27:06.685 [2024-07-12 17:35:25.300418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.300428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.300518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.300527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.300685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.300694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.300850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.300860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.301004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.301014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 
00:27:06.685 [2024-07-12 17:35:25.301178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.301188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.301395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.301405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.301489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.685 [2024-07-12 17:35:25.301498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.685 qpair failed and we were unable to recover it. 00:27:06.685 [2024-07-12 17:35:25.301559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.301568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.301645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.301654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 
00:27:06.686 [2024-07-12 17:35:25.301794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.301802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.301957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.301967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.302208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.302218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.302374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.302388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.302477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.302486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 
00:27:06.686 [2024-07-12 17:35:25.302691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.302701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.302784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.302793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.302887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.302896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.303047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.303057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.303205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.303215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 
00:27:06.686 [2024-07-12 17:35:25.303299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.303308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.303465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.303475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.303685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.303694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.303830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.303839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.304018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.304027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 
00:27:06.686 [2024-07-12 17:35:25.304242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.304251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.304343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.304352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.304434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.304444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.304533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.304542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.304775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.304784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 
00:27:06.686 [2024-07-12 17:35:25.304977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.304987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.305134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.305143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.305236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.305245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.305332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.305342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.305484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.305494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 
00:27:06.686 [2024-07-12 17:35:25.305710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.305720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.305879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.305888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.306111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.306140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.306287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.306321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.686 [2024-07-12 17:35:25.306684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.306717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 
00:27:06.686 [2024-07-12 17:35:25.306850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.686 [2024-07-12 17:35:25.306879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.686 qpair failed and we were unable to recover it. 00:27:06.687 [2024-07-12 17:35:25.307142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.687 [2024-07-12 17:35:25.307171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.687 qpair failed and we were unable to recover it. 00:27:06.687 [2024-07-12 17:35:25.307301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.687 [2024-07-12 17:35:25.307331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.687 qpair failed and we were unable to recover it. 00:27:06.687 [2024-07-12 17:35:25.307626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.687 [2024-07-12 17:35:25.307657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.687 qpair failed and we were unable to recover it. 00:27:06.687 [2024-07-12 17:35:25.307776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.687 [2024-07-12 17:35:25.307805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.687 qpair failed and we were unable to recover it. 
00:27:06.687 [2024-07-12 17:35:25.312796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.687 [2024-07-12 17:35:25.312844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.687 qpair failed and we were unable to recover it.
00:27:06.688 [2024-07-12 17:35:25.320820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.688 [2024-07-12 17:35:25.320888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:06.688 qpair failed and we were unable to recover it.
00:27:06.689 [2024-07-12 17:35:25.330114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.689 [2024-07-12 17:35:25.330126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.689 qpair failed and we were unable to recover it. 00:27:06.689 [2024-07-12 17:35:25.330278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.689 [2024-07-12 17:35:25.330292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.689 qpair failed and we were unable to recover it. 00:27:06.689 [2024-07-12 17:35:25.330475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.689 [2024-07-12 17:35:25.330489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.689 qpair failed and we were unable to recover it. 00:27:06.689 [2024-07-12 17:35:25.330653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.689 [2024-07-12 17:35:25.330666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.689 qpair failed and we were unable to recover it. 00:27:06.689 [2024-07-12 17:35:25.330768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.689 [2024-07-12 17:35:25.330784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.689 qpair failed and we were unable to recover it. 
00:27:06.689 [2024-07-12 17:35:25.331032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.689 [2024-07-12 17:35:25.331062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.689 qpair failed and we were unable to recover it. 00:27:06.689 [2024-07-12 17:35:25.331267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.689 [2024-07-12 17:35:25.331296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.689 qpair failed and we were unable to recover it. 00:27:06.689 [2024-07-12 17:35:25.331410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.689 [2024-07-12 17:35:25.331441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.689 qpair failed and we were unable to recover it. 00:27:06.689 [2024-07-12 17:35:25.331565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.689 [2024-07-12 17:35:25.331593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.331840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.331853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 
00:27:06.690 [2024-07-12 17:35:25.331964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.331977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.332110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.332123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.332311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.332325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.332497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.332510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.332676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.332689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 
00:27:06.690 [2024-07-12 17:35:25.332830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.332843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.332941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.332955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.333055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.333069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.333166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.333180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.333420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.333434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 
00:27:06.690 [2024-07-12 17:35:25.333626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.333658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.333859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.333888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.334102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.334132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.334345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.334374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.334568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.334581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 
00:27:06.690 [2024-07-12 17:35:25.334729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.334743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.334874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.334887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.335100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.335113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.335475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.335492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.335707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.335722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 
00:27:06.690 [2024-07-12 17:35:25.335835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.335848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.335959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.335976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.336155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.336168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.336278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.336291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.336485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.336500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 
00:27:06.690 [2024-07-12 17:35:25.336738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.336752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.336855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.336868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.337084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.337098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.337182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.337195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.337412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.337426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 
00:27:06.690 [2024-07-12 17:35:25.337544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.337558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.337780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.337810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.338003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.690 [2024-07-12 17:35:25.338032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-12 17:35:25.338231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.338260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.338490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.338520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 
00:27:06.691 [2024-07-12 17:35:25.338707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.338738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.338926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.338939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.339178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.339192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.339398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.339428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.339647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.339676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 
00:27:06.691 [2024-07-12 17:35:25.339879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.339908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.340113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.340126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.340309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.340322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.340412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.340425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.340640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.340653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 
00:27:06.691 [2024-07-12 17:35:25.340809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.340822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.340926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.340940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.341026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.341039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.341133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.341146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.341319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.341333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 
00:27:06.691 [2024-07-12 17:35:25.341563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.341594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.341726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.341755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.341957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.341986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.342202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.342216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.342408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.342422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 
00:27:06.691 [2024-07-12 17:35:25.342575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.342588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.342838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.342868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.343116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.343146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.343328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.343357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.343596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.343628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 
00:27:06.691 [2024-07-12 17:35:25.343765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.343795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.343995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.344008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.344193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.344225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.344358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.344373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 00:27:06.691 [2024-07-12 17:35:25.344633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.344664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 
00:27:06.691 [2024-07-12 17:35:25.344882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.691 [2024-07-12 17:35:25.344911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:06.691 qpair failed and we were unable to recover it. 
00:27:06.691-00:27:06.694 [the same connect() failed, errno = 111 (ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously from 17:35:25.345056 through 17:35:25.367289, first for tqpair=0x7f4a84000b90 and then, from 17:35:25.352967 onward, for tqpair=0x7f4a7c000b90, all against addr=10.0.0.2, port=4420; repeated entries elided] 
00:27:06.694 [2024-07-12 17:35:25.367530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.694 [2024-07-12 17:35:25.367540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.694 qpair failed and we were unable to recover it. 00:27:06.694 [2024-07-12 17:35:25.367775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.694 [2024-07-12 17:35:25.367785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.694 qpair failed and we were unable to recover it. 00:27:06.694 [2024-07-12 17:35:25.367933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.694 [2024-07-12 17:35:25.367944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.694 qpair failed and we were unable to recover it. 00:27:06.694 [2024-07-12 17:35:25.368047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.694 [2024-07-12 17:35:25.368057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.694 qpair failed and we were unable to recover it. 00:27:06.694 [2024-07-12 17:35:25.368156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.694 [2024-07-12 17:35:25.368166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.694 qpair failed and we were unable to recover it. 
00:27:06.694 [2024-07-12 17:35:25.368266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.694 [2024-07-12 17:35:25.368278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.694 qpair failed and we were unable to recover it. 00:27:06.694 [2024-07-12 17:35:25.368419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.694 [2024-07-12 17:35:25.368430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.694 qpair failed and we were unable to recover it. 00:27:06.694 [2024-07-12 17:35:25.368575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.694 [2024-07-12 17:35:25.368585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.694 qpair failed and we were unable to recover it. 00:27:06.694 [2024-07-12 17:35:25.368744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.694 [2024-07-12 17:35:25.368753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.694 qpair failed and we were unable to recover it. 00:27:06.694 [2024-07-12 17:35:25.368896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.694 [2024-07-12 17:35:25.368907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.694 qpair failed and we were unable to recover it. 
00:27:06.694 [2024-07-12 17:35:25.368999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.694 [2024-07-12 17:35:25.369008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.694 qpair failed and we were unable to recover it. 00:27:06.694 [2024-07-12 17:35:25.369100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.694 [2024-07-12 17:35:25.369109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.369247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.369256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.369342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.369351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.369447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.369457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 
00:27:06.695 [2024-07-12 17:35:25.369539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.369548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.369644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.369653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.369736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.369746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.369827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.369836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.369980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.369990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 
00:27:06.695 [2024-07-12 17:35:25.370081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.370090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.370248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.370258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.370330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.370339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.370497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.370508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.370583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.370592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 
00:27:06.695 [2024-07-12 17:35:25.370659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.370668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.370739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.370748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.370821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.370830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.370908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.370917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.371126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.371136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 
00:27:06.695 [2024-07-12 17:35:25.371216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.371225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.371319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.371328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.371422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.371431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.371533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.371542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.371654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.371663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 
00:27:06.695 [2024-07-12 17:35:25.371766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.371776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.371916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.371926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.372092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.372102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.372251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.372261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.372402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.372412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 
00:27:06.695 [2024-07-12 17:35:25.372497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.372505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.372595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.372604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.372694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.372705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.372844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.372854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.372941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.372950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 
00:27:06.695 [2024-07-12 17:35:25.373087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.373097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.373184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.373193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.373334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.373344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.373417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.373427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 00:27:06.695 [2024-07-12 17:35:25.373553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.373562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.695 qpair failed and we were unable to recover it. 
00:27:06.695 [2024-07-12 17:35:25.373786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.695 [2024-07-12 17:35:25.373796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.373948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.373957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.374155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.374164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.374398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.374428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.374612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.374641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 
00:27:06.696 [2024-07-12 17:35:25.374862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.374891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.375102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.375133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.375319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.375348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.375691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.375759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.375899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.375931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 
00:27:06.696 [2024-07-12 17:35:25.376157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.376188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.376327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.376357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.376551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.376582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.376830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.376860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.376990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.377020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 
00:27:06.696 [2024-07-12 17:35:25.377203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.377232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.377438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.377469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.377680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.377694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.377868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.377897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.378169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.378199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 
00:27:06.696 [2024-07-12 17:35:25.378460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.378490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.378688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.378701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.378852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.378896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.379146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.379175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.379303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.379332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 
00:27:06.696 [2024-07-12 17:35:25.379455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.379485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.379775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.379804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.379952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.379992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.380153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.380167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.380348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.380383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 
00:27:06.696 [2024-07-12 17:35:25.380565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed000 is same with the state(5) to be set 00:27:06.696 [2024-07-12 17:35:25.380953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.381022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.381253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.381286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.381480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.381511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.381728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.381738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 00:27:06.696 [2024-07-12 17:35:25.381887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.696 [2024-07-12 17:35:25.381896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:06.696 qpair failed and we were unable to recover it. 
00:27:06.696 [2024-07-12 17:35:25.382025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.696 [2024-07-12 17:35:25.382035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.696 qpair failed and we were unable to recover it.
00:27:06.696 [2024-07-12 17:35:25.382111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.696 [2024-07-12 17:35:25.382120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.696 qpair failed and we were unable to recover it.
00:27:06.696 [2024-07-12 17:35:25.382279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.696 [2024-07-12 17:35:25.382289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.696 qpair failed and we were unable to recover it.
00:27:06.696 [2024-07-12 17:35:25.382448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.696 [2024-07-12 17:35:25.382458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.696 qpair failed and we were unable to recover it.
00:27:06.696 [2024-07-12 17:35:25.382664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.696 [2024-07-12 17:35:25.382674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.696 qpair failed and we were unable to recover it.
00:27:06.696 [2024-07-12 17:35:25.382817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.696 [2024-07-12 17:35:25.382827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.696 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.382913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.382923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.383084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.383094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.383258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.383287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.383462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.383493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.383714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.383790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.383959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.383976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.384209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.384240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.384397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.384430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.384686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.384716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.384900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.384913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.385090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.385121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.385335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.385364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.385632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.385663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.385864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.385893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.386021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.386034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.386132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.386145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.386243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.386257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.386423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.386442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.386622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.386636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.386798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.386811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.386892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.386904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.387147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.387160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.387317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.387330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.387408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.387421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.387536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.387548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.387695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.387706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.387812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.387822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.387964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.387974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.388141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.388151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.388310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.388346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.388500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.388530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.388672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.388701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.388899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.388929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.389110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.389139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.389324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.389353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.389572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.389603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.389798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.389828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.389959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.389989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.390262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.390271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.390426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.390436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.697 [2024-07-12 17:35:25.390656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.697 [2024-07-12 17:35:25.390665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.697 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.390743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.390752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.390906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.390917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.391073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.391082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.391270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.391304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.391503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.391519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.391682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.391695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.391851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.391882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.392011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.392040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.392179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.392208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.392483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.392514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.392772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.392801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.392923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.392953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.393140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.393153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.393258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.393272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.393431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.393445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.393593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.393606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.393826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.393855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.394062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.394092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.394224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.394253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.394438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.394468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.394596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.394626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.394807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.394836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.395031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.395060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.395238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.395251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.395459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.395473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.395722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.395751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.395977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.396006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.396149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.396179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.396413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.396427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.396596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.396609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.396793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.396822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.396963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.396992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.397204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.397234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.397356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.397396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.397595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.397625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.397908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.397937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.398128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.398142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.398300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.398313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.398394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.398407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.398571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.698 [2024-07-12 17:35:25.398584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.698 qpair failed and we were unable to recover it.
00:27:06.698 [2024-07-12 17:35:25.398738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.699 [2024-07-12 17:35:25.398751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.699 qpair failed and we were unable to recover it.
00:27:06.699 [2024-07-12 17:35:25.398944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.699 [2024-07-12 17:35:25.398957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.699 qpair failed and we were unable to recover it.
00:27:06.699 [2024-07-12 17:35:25.399060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.699 [2024-07-12 17:35:25.399073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.699 qpair failed and we were unable to recover it.
00:27:06.699 [2024-07-12 17:35:25.399222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.699 [2024-07-12 17:35:25.399238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.699 qpair failed and we were unable to recover it.
00:27:06.699 [2024-07-12 17:35:25.399394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.699 [2024-07-12 17:35:25.399408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.699 qpair failed and we were unable to recover it.
00:27:06.699 [2024-07-12 17:35:25.399516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.699 [2024-07-12 17:35:25.399530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.699 qpair failed and we were unable to recover it.
00:27:06.699 [2024-07-12 17:35:25.399755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.699 [2024-07-12 17:35:25.399768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.699 qpair failed and we were unable to recover it.
00:27:06.699 [2024-07-12 17:35:25.399863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:06.699 [2024-07-12 17:35:25.399875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:06.699 qpair failed and we were unable to recover it.
00:27:06.699 [2024-07-12 17:35:25.399973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.399986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.400132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.400145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.400297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.400311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.400399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.400411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.400576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.400589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 
00:27:06.699 [2024-07-12 17:35:25.400686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.400700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.400788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.400801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.400957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.400970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.401137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.401150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.401249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.401262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 
00:27:06.699 [2024-07-12 17:35:25.401347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.401359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.401519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.401533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.401752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.401765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.401858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.401871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.402051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.402064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 
00:27:06.699 [2024-07-12 17:35:25.402255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.402268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.402460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.402474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.402654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.402683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.402935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.402965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.403181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.403211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 
00:27:06.699 [2024-07-12 17:35:25.403443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.403474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.403724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.403753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.403959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.403988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.404180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.404194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.404282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.404295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 
00:27:06.699 [2024-07-12 17:35:25.404398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.404411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.699 [2024-07-12 17:35:25.404557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.699 [2024-07-12 17:35:25.404571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.699 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.404739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.404753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.404899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.404912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.405075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.405088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 
00:27:06.700 [2024-07-12 17:35:25.405305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.405334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.405617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.405647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.405782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.405811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.406080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.406109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.406244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.406273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 
00:27:06.700 [2024-07-12 17:35:25.406404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.406440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.406666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.406695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.406881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.406910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.407039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.407052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.407230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.407258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 
00:27:06.700 [2024-07-12 17:35:25.407468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.407498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.407624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.407653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.407788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.407817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.408057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.408070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.408171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.408183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 
00:27:06.700 [2024-07-12 17:35:25.408260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.408272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.408434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.408448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.408528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.408540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.408622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.408635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.408820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.408863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 
00:27:06.700 [2024-07-12 17:35:25.409144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.409173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.409400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.409430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.409629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.409658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.409774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.409802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.410078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.410107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 
00:27:06.700 [2024-07-12 17:35:25.410237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.410266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.410462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.410493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.410611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.410640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.410835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.410849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.410946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.410961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 
00:27:06.700 [2024-07-12 17:35:25.411049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.411062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.411289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.411303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.411450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.411465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.411608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.411622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.411770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.411783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 
00:27:06.700 [2024-07-12 17:35:25.411932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.411945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.412201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.700 [2024-07-12 17:35:25.412214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.700 qpair failed and we were unable to recover it. 00:27:06.700 [2024-07-12 17:35:25.412359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.412372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.412462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.412475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.412640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.412654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 
00:27:06.701 [2024-07-12 17:35:25.412750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.412763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.412847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.412860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.413028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.413041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.413223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.413236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.413335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.413348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 
00:27:06.701 [2024-07-12 17:35:25.413519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.413536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.413621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.413634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.413840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.413853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.413956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.413969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.414060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.414073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 
00:27:06.701 [2024-07-12 17:35:25.414291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.414304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.414473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.414487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.414637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.414651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.414725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.414737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.414956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.414969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 
00:27:06.701 [2024-07-12 17:35:25.415060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.415072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.415159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.415171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.415329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.415342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.415482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.415496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.415597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.415611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 
00:27:06.701 [2024-07-12 17:35:25.415704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.415716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.415869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.415882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.416048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.416062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.416157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.416170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.416266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.416280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 
00:27:06.701 [2024-07-12 17:35:25.416395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.416409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.416653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.416683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.416827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.416856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.416990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.417019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.417217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.417246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 
00:27:06.701 [2024-07-12 17:35:25.417495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.417525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.417785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.417815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.417958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.417972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.418195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.418224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.418419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.418450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 
00:27:06.701 [2024-07-12 17:35:25.418701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.418730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.701 [2024-07-12 17:35:25.418865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.701 [2024-07-12 17:35:25.418878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.701 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.419036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.419050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.419198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.419211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.419324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.419337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 
00:27:06.702 [2024-07-12 17:35:25.419553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.419567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.419781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.419794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.419906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.419920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.420015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.420027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.420250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.420279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 
00:27:06.702 [2024-07-12 17:35:25.420532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.420567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.420818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.420847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.421030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.421059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.421258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.421287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.421481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.421510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 
00:27:06.702 [2024-07-12 17:35:25.421708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.421737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.421868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.421881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.422043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.422056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.422219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.422232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.422421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.422452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 
00:27:06.702 [2024-07-12 17:35:25.422579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.422608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.422858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.422887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.423086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.423100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.423200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.423212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.423434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.423448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 
00:27:06.702 [2024-07-12 17:35:25.423666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.423680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.423841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.423854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.424033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.424047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.424195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.424208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.424326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.424339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 
00:27:06.702 [2024-07-12 17:35:25.424506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.424520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.424602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.424614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.424764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.424777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.424888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.424901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.425060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.425074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 
00:27:06.702 [2024-07-12 17:35:25.425233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.425247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.425419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.425433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.425587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.425600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.425841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.425854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.426014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.426027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 
00:27:06.702 [2024-07-12 17:35:25.426243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.426256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.702 qpair failed and we were unable to recover it. 00:27:06.702 [2024-07-12 17:35:25.426358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.702 [2024-07-12 17:35:25.426371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.426517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.426530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.426779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.426792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.426888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.426900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 
00:27:06.703 [2024-07-12 17:35:25.427067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.427081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.427262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.427276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.427433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.427447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.427603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.427616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.427711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.427724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 
00:27:06.703 [2024-07-12 17:35:25.427874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.427890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.428004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.428018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.428096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.428109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.428221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.428234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.428397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.428411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 
00:27:06.703 [2024-07-12 17:35:25.428559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.428573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.428651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.428664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.428849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.428862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.428930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.428942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.429106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.429119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 
00:27:06.703 [2024-07-12 17:35:25.429220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.429233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.429476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.429490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.429586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.429598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.429679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.429692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.429797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.429809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 
00:27:06.703 [2024-07-12 17:35:25.429961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.429974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.430110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.430124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.430222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.430235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.430331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.430347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.430591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.430605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 
00:27:06.703 [2024-07-12 17:35:25.430791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.430805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.430972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.430985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.431140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.431153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.431254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.431267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 00:27:06.703 [2024-07-12 17:35:25.431373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.703 [2024-07-12 17:35:25.431401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.703 qpair failed and we were unable to recover it. 
00:27:06.703 [2024-07-12 17:35:25.431570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.704 [2024-07-12 17:35:25.431583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.704 qpair failed and we were unable to recover it. 00:27:06.704 [2024-07-12 17:35:25.431675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.704 [2024-07-12 17:35:25.431687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.704 qpair failed and we were unable to recover it. 00:27:06.704 [2024-07-12 17:35:25.431781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.704 [2024-07-12 17:35:25.431794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.704 qpair failed and we were unable to recover it. 00:27:06.704 [2024-07-12 17:35:25.431989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.704 [2024-07-12 17:35:25.432002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.704 qpair failed and we were unable to recover it. 00:27:06.704 [2024-07-12 17:35:25.432098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.704 [2024-07-12 17:35:25.432111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:06.704 qpair failed and we were unable to recover it. 
00:27:07.030 [2024-07-12 17:35:25.445447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.030 [2024-07-12 17:35:25.445481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.030 qpair failed and we were unable to recover it.
00:27:07.031 [2024-07-12 17:35:25.449762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.449774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.450009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.450022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.450124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.450137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.450233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.450246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.450411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.450427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-07-12 17:35:25.450650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.450663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.450789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.450802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.450901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.450915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.451094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.451107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.451264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.451277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-07-12 17:35:25.451567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.451597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.451783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.451813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.451937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.451950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.452036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.452050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.452197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.452210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-07-12 17:35:25.452370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.452415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.452542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.452571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.452809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.452844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.453039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.453053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.453233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.453246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-07-12 17:35:25.453397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.453411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.453532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.453545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.453720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.453734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.453915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.453928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.454100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.454113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-07-12 17:35:25.454220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.454233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.454386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.454401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.454566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.454579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.454755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.454769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.031 [2024-07-12 17:35:25.454984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.454997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 
00:27:07.031 [2024-07-12 17:35:25.455096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.031 [2024-07-12 17:35:25.455109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.031 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.455272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.455286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.455471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.455502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.455690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.455720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.455908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.455936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-07-12 17:35:25.456150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.456163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.456389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.456403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.456591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.456604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.456768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.456797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.456997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.457026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-07-12 17:35:25.457170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.457199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.457343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.457357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.457606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.457636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.457835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.457864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.458052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.458086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-07-12 17:35:25.458280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.458293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.458448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.458462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.458618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.458631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.458745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.458758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.458860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.458873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-07-12 17:35:25.459145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.459158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.459440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.459453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.459548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.459562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.459732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.459745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.459898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.459911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-07-12 17:35:25.460158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.460187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.460396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.460426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.460542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.460571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.460803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.460833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.460980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.461008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-07-12 17:35:25.461270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.461283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.461456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.461470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.461643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.461656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.461755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.461768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.461943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.461956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-07-12 17:35:25.462118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.462131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.462226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.462239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.462411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.462425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.462532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.462545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.462617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.462630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 
00:27:07.032 [2024-07-12 17:35:25.462736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.032 [2024-07-12 17:35:25.462749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.032 qpair failed and we were unable to recover it. 00:27:07.032 [2024-07-12 17:35:25.462911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.462925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.463073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.463086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.463148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.463161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.463274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.463287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 
00:27:07.033 [2024-07-12 17:35:25.463375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.463393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.463482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.463495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.463611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.463624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.463731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.463745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.463964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.463977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 
00:27:07.033 [2024-07-12 17:35:25.464122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.464135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.464230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.464244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.464347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.464360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.464450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.464463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.464549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.464564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 
00:27:07.033 [2024-07-12 17:35:25.464664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.464677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.464911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.464924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.465026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.465039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.465121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.465133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 00:27:07.033 [2024-07-12 17:35:25.465216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.033 [2024-07-12 17:35:25.465228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.033 qpair failed and we were unable to recover it. 
00:27:07.036 [2024-07-12 17:35:25.485076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.485104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.485247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.485261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.485425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.485439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.485683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.485712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.485831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.485865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 
00:27:07.036 [2024-07-12 17:35:25.486086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.486115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.486325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.486338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.486438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.486451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.486601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.486615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.486765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.486778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 
00:27:07.036 [2024-07-12 17:35:25.486944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.486957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.487119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.487131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.487322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.487335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.487497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.487511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.487621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.487635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 
00:27:07.036 [2024-07-12 17:35:25.487731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.487744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.487861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.487874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.487959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.487972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.488125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.488138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.488286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.488312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 
00:27:07.036 [2024-07-12 17:35:25.488394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.488406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.488511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.488520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.488630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.488641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.488799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.488810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 00:27:07.036 [2024-07-12 17:35:25.488958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.036 [2024-07-12 17:35:25.488967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.036 qpair failed and we were unable to recover it. 
00:27:07.036 [2024-07-12 17:35:25.489051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.489060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.489205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.489215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.489355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.489394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.489542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.489572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.489711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.489740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 
00:27:07.037 [2024-07-12 17:35:25.489884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.489913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.490027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.490056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.490172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.490181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.490337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.490347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.490498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.490509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 
00:27:07.037 [2024-07-12 17:35:25.490598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.490608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.490755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.490765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.490947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.490957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.491078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.491089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.491178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.491187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 
00:27:07.037 [2024-07-12 17:35:25.491279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.491287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.491519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.491529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.491626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.491635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.491740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.491750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.491891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.491903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 
00:27:07.037 [2024-07-12 17:35:25.491996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.492005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.492130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.492140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.492240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.492252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.492355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.492365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.492481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.492490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 
00:27:07.037 [2024-07-12 17:35:25.492600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.492610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.492693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.492702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.492868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.492878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.492973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.492983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.493126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.493136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 
00:27:07.037 [2024-07-12 17:35:25.493216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.493225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.493301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.493310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.493520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.493531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.493599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.493608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.493784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.493794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 
00:27:07.037 [2024-07-12 17:35:25.493941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.493970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.494187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.494217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.494335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.494365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.494524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.494534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.037 [2024-07-12 17:35:25.494675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.494685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 
00:27:07.037 [2024-07-12 17:35:25.494841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.037 [2024-07-12 17:35:25.494851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.037 qpair failed and we were unable to recover it. 00:27:07.038 [2024-07-12 17:35:25.495029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-07-12 17:35:25.495038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-07-12 17:35:25.495153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-07-12 17:35:25.495163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-07-12 17:35:25.495253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-07-12 17:35:25.495262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-07-12 17:35:25.495337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-07-12 17:35:25.495346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 
00:27:07.038 [2024-07-12 17:35:25.495462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-07-12 17:35:25.495471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-07-12 17:35:25.495617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-07-12 17:35:25.495627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-07-12 17:35:25.495722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-07-12 17:35:25.495731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-07-12 17:35:25.495965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-07-12 17:35:25.495975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 00:27:07.038 [2024-07-12 17:35:25.496067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.038 [2024-07-12 17:35:25.496076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.038 qpair failed and we were unable to recover it. 
00:27:07.038 [2024-07-12 17:35:25.496151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.496160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.496257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.496266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.496415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.496426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.496496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.496506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.496595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.496604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.496760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.496770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.496944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.496954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.497053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.497063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.497177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.497187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.497276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.497287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.497516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.497526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.497613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.497623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.497732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.497742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.497862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.497872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.497967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.497977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.498127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.498137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.498290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.498300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.498398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.498409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.498549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.498558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.498711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.498721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.498935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.498965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.499184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.499214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.499460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.499471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.499640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.499650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.038 [2024-07-12 17:35:25.499807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.038 [2024-07-12 17:35:25.499817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.038 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.500065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.500075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.500236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.500245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.500352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.500361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.500460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.500470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.500552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.500561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.500772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.500802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.500933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.500963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.501087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.501116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.501257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.501267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.501489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.501520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.501728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.501758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.501990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.502057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.502285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.502301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.502504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.502519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.502617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.502630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.502746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.502759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.502945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.502958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.503174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.503187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.503339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.503352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.503588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.503617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.503758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.503787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.503922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.503951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.504151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.504164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.504334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.504367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.504565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.504600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.504788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.504817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.504960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.504970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.505157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.505197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.505319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.505349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.505625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.505657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.505812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.505841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.506046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.506075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.506297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.506326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.506566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.506597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.506825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.506854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.506967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.506996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.507191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.507220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.507351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.507391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.507601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.507630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.507765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.507794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.039 qpair failed and we were unable to recover it.
00:27:07.039 [2024-07-12 17:35:25.508043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.039 [2024-07-12 17:35:25.508071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.508321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.508350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.508598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.508666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.508874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.508906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.509050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.509081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.509331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.509361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.509513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.509543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.509729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.509758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.509945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.509974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.510181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.510210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.510396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.510410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.510577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.510592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.510865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.510895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.511077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.511105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.511251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.511280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.511415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.511428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.511597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.511610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.511785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.511798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.511964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.511977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.512148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.512161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.512330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.512343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.512525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.512538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.512693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.512730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.512915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.512944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.513136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.513170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.513351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.513365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.513539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.513554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.513717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.513730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.513897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.513910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.514009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.514022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.514181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.514195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.514453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.514467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.514564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.514576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.514722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.514736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.514902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.514915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.515016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.515029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.515244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.515257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.515417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.515431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.515679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.515709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.515858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.515887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.516137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.040 [2024-07-12 17:35:25.516166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.040 qpair failed and we were unable to recover it.
00:27:07.040 [2024-07-12 17:35:25.516366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.041 [2024-07-12 17:35:25.516383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.041 qpair failed and we were unable to recover it.
00:27:07.041 [2024-07-12 17:35:25.516567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.041 [2024-07-12 17:35:25.516581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.041 qpair failed and we were unable to recover it.
00:27:07.041 [2024-07-12 17:35:25.516800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.041 [2024-07-12 17:35:25.516829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.041 qpair failed and we were unable to recover it.
00:27:07.041 [2024-07-12 17:35:25.517107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.041 [2024-07-12 17:35:25.517136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.041 qpair failed and we were unable to recover it.
00:27:07.041 [2024-07-12 17:35:25.517322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.041 [2024-07-12 17:35:25.517351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.041 qpair failed and we were unable to recover it.
00:27:07.041 [2024-07-12 17:35:25.517549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.041 [2024-07-12 17:35:25.517580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.041 qpair failed and we were unable to recover it.
00:27:07.041 [2024-07-12 17:35:25.517715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.041 [2024-07-12 17:35:25.517744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.041 qpair failed and we were unable to recover it.
00:27:07.041 [2024-07-12 17:35:25.517943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.041 [2024-07-12 17:35:25.517973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.041 qpair failed and we were unable to recover it.
00:27:07.041 [2024-07-12 17:35:25.518149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.041 [2024-07-12 17:35:25.518162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.041 qpair failed and we were unable to recover it.
00:27:07.041 [2024-07-12 17:35:25.518385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.518399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.518641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.518655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.518823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.518837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.519069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.519098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.519248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.519277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 
00:27:07.041 [2024-07-12 17:35:25.519456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.519487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.519675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.519704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.519839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.519869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.520045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.520074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.520330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.520344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 
00:27:07.041 [2024-07-12 17:35:25.520510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.520524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.520634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.520647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.520739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.520751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.520904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.520918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.521092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.521108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 
00:27:07.041 [2024-07-12 17:35:25.521257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.521271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.521372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.521390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.521506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.521520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.521764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.521778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.521935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.521949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 
00:27:07.041 [2024-07-12 17:35:25.522090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.522103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.522232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.522246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.522392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.522405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.522500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.522513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.522758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.522772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 
00:27:07.041 [2024-07-12 17:35:25.522931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.522944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.523108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.523141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.523271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.523301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.523425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.523456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.523607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.523636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 
00:27:07.041 [2024-07-12 17:35:25.523826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.523855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.041 [2024-07-12 17:35:25.524039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.041 [2024-07-12 17:35:25.524068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.041 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.524317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.524356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.524568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.524582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.524820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.524834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 
00:27:07.042 [2024-07-12 17:35:25.524947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.524960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.525182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.525195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.525445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.525458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.525710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.525723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.525884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.525897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 
00:27:07.042 [2024-07-12 17:35:25.526069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.526083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.526250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.526263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.526411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.526425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.526509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.526522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.526605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.526618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 
00:27:07.042 [2024-07-12 17:35:25.526718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.526731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.526969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.526999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.527185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.527214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.527396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.527426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.527574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.527589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 
00:27:07.042 [2024-07-12 17:35:25.527687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.527701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.527850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.527864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.528125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.528139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.528297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.528311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.528415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.528432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 
00:27:07.042 [2024-07-12 17:35:25.528532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.528547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.528643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.528656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.528771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.528785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.528971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.528985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.529147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.529161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 
00:27:07.042 [2024-07-12 17:35:25.529408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.529437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.529637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.529666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.529806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.529836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.529982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.530011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.530205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.530234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 
00:27:07.042 [2024-07-12 17:35:25.530451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.530481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.530742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.530772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.530914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.530943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.531218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.531248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.531396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.531427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 
00:27:07.042 [2024-07-12 17:35:25.531620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.531633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.531872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.042 [2024-07-12 17:35:25.531886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.042 qpair failed and we were unable to recover it. 00:27:07.042 [2024-07-12 17:35:25.532055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-07-12 17:35:25.532068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-07-12 17:35:25.532217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-07-12 17:35:25.532231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 00:27:07.043 [2024-07-12 17:35:25.532429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.043 [2024-07-12 17:35:25.532443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.043 [2024-07-12 17:35:25.532687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:07.043 [2024-07-12 17:35:25.532700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 
00:27:07.043 qpair failed and we were unable to recover it. 
00:27:07.046 [... same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." sequence repeated for tqpair values 0x7f4a84000b90, 0x7f4a74000b90, 0x24deed0, and 0x7f4a7c000b90, all with addr=10.0.0.2, port=4420, through 2024-07-12 17:35:25.556374 ...] 
00:27:07.046 [2024-07-12 17:35:25.556622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.556632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.556844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.556854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.557036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.557045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.557204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.557215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.557445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.557456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-07-12 17:35:25.557677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.557687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.557896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.557906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.558131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.558141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.558316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.558326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.558421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.558430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-07-12 17:35:25.558654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.558664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.558914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.558925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.559083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.559093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.559279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.559290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.559496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.559507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-07-12 17:35:25.559677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.559686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.559835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.559846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.559996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.560006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.560240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.560250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.560340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.560349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-07-12 17:35:25.560441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.560451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.560692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.560704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.560865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.560875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.561040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.561050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.561262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.561272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-07-12 17:35:25.561483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.561494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.561662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.561672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.561772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.561785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.561938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.561948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.562185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.562195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-07-12 17:35:25.562411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.562421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.562660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.562670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.562930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.562940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.563124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.563134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.563289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.563299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 
00:27:07.046 [2024-07-12 17:35:25.563399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.563408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.563604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.563614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.563846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.046 [2024-07-12 17:35:25.563856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.046 qpair failed and we were unable to recover it. 00:27:07.046 [2024-07-12 17:35:25.564083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.564112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.564311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.564341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-07-12 17:35:25.564612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.564622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.564847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.564856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.565084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.565094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.565236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.565246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.565452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.565462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-07-12 17:35:25.565653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.565663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.565819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.565829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.565992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.566002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.566186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.566196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.566411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.566421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-07-12 17:35:25.566582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.566592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.566825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.566837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.567033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.567043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.567251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.567261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.567436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.567446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-07-12 17:35:25.567607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.567617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.567791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.567801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.568058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.568068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.568241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.568251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.568427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.568438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-07-12 17:35:25.568597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.568607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.568815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.568825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.569060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.569070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.569314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.569324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.569560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.569571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-07-12 17:35:25.569813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.569823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.569990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.570000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.570238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.570248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.570483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.570493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.570665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.570675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 
00:27:07.047 [2024-07-12 17:35:25.570926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.570955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.571223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.571252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.047 qpair failed and we were unable to recover it. 00:27:07.047 [2024-07-12 17:35:25.571389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.047 [2024-07-12 17:35:25.571419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-07-12 17:35:25.571672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-07-12 17:35:25.571702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 00:27:07.048 [2024-07-12 17:35:25.571954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.048 [2024-07-12 17:35:25.571984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.048 qpair failed and we were unable to recover it. 
00:27:07.048 [2024-07-12 17:35:25.572166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.048 [2024-07-12 17:35:25.572196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.048 qpair failed and we were unable to recover it.
00:27:07.048 [... the same error pair (posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." repeat verbatim for every connection retry from 17:35:25.572457 through 17:35:25.594674 (log timestamps 00:27:07.048-00:27:07.050) ...]
00:27:07.050 [2024-07-12 17:35:25.594859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-07-12 17:35:25.594869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-07-12 17:35:25.595051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-07-12 17:35:25.595061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-07-12 17:35:25.595242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-07-12 17:35:25.595252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-07-12 17:35:25.595457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-07-12 17:35:25.595467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.050 qpair failed and we were unable to recover it. 00:27:07.050 [2024-07-12 17:35:25.595621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.050 [2024-07-12 17:35:25.595631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 
00:27:07.051 [2024-07-12 17:35:25.595789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.595799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.596029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.596039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.596268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.596278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.596491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.596501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.596672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.596682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 
00:27:07.051 [2024-07-12 17:35:25.596915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.596925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.597112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.597122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.597298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.597308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.597543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.597553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.597760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.597769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 
00:27:07.051 [2024-07-12 17:35:25.598029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.598039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.598219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.598228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.598329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.598337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.598543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.598554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.598638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.598647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 
00:27:07.051 [2024-07-12 17:35:25.598857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.598867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.599100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.599111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.599254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.599264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.599479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.599489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.599712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.599722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 
00:27:07.051 [2024-07-12 17:35:25.599830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.599840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.600072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.600082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.600162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.600171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.600313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.600323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.600425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.600435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 
00:27:07.051 [2024-07-12 17:35:25.600587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.600597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.600754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.600764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.600978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.600988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.601163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.601187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.601440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.601472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 
00:27:07.051 [2024-07-12 17:35:25.601746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.601777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.602076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.602106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.602315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.602345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.602592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.051 [2024-07-12 17:35:25.602624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.051 qpair failed and we were unable to recover it. 00:27:07.051 [2024-07-12 17:35:25.602734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.602744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 
00:27:07.052 [2024-07-12 17:35:25.602982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.602992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.603119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.603129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.603336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.603346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.603489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.603500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.603670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.603680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 
00:27:07.052 [2024-07-12 17:35:25.603818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.603829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.604007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.604017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.604187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.604196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.604387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.604398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.604603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.604613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 
00:27:07.052 [2024-07-12 17:35:25.604732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.604742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.604975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.604985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.605217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.605227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.605479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.605490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.605641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.605651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 
00:27:07.052 [2024-07-12 17:35:25.605811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.605821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.606053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.606063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.606227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.606236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.606468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.606478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.606693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.606703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 
00:27:07.052 [2024-07-12 17:35:25.606891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.606901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.607092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.607104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.607362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.607373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.607531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.607541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.607725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.607735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 
00:27:07.052 [2024-07-12 17:35:25.607956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.607965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.608116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.608126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.608362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.608372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.608661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.608692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.608913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.608942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 
00:27:07.052 [2024-07-12 17:35:25.609163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.609193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.609392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.609423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.609603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.609613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.609752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.609762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 00:27:07.052 [2024-07-12 17:35:25.609923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.052 [2024-07-12 17:35:25.609932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.052 qpair failed and we were unable to recover it. 
00:27:07.052 [2024-07-12 17:35:25.610122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.052 [2024-07-12 17:35:25.610132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.052 qpair failed and we were unable to recover it.
00:27:07.052 [... the connect() failed / sock connection error / qpair failed triplet above repeats 12 more times for tqpair=0x7f4a7c000b90 (17:35:25.610360 through 17:35:25.612447) ...]
00:27:07.053 [... the same triplet repeats 40 times for tqpair=0x7f4a84000b90 (17:35:25.612631 through 17:35:25.620131) ...]
00:27:07.054 [... the same triplet repeats 14 times for tqpair=0x7f4a74000b90 (17:35:25.620383 through 17:35:25.622959) ...]
00:27:07.055 [... the same triplet repeats 48 times for tqpair=0x7f4a7c000b90 (17:35:25.623161 through 17:35:25.631790) ...]
00:27:07.055 [2024-07-12 17:35:25.632024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.055 [2024-07-12 17:35:25.632054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.055 qpair failed and we were unable to recover it. 00:27:07.055 [2024-07-12 17:35:25.632304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.055 [2024-07-12 17:35:25.632333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.055 qpair failed and we were unable to recover it. 00:27:07.055 [2024-07-12 17:35:25.632546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.055 [2024-07-12 17:35:25.632557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.055 qpair failed and we were unable to recover it. 00:27:07.055 [2024-07-12 17:35:25.632698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.055 [2024-07-12 17:35:25.632708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.055 qpair failed and we were unable to recover it. 00:27:07.055 [2024-07-12 17:35:25.632859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.055 [2024-07-12 17:35:25.632869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.055 qpair failed and we were unable to recover it. 
00:27:07.055 [2024-07-12 17:35:25.633089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.055 [2024-07-12 17:35:25.633099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.055 qpair failed and we were unable to recover it. 00:27:07.055 [2024-07-12 17:35:25.633336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.633346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.633553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.633563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.633746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.633756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.633912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.633922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 
00:27:07.056 [2024-07-12 17:35:25.634132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.634143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.634253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.634264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.634499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.634510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.634740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.634750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.635005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.635015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 
00:27:07.056 [2024-07-12 17:35:25.635172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.635182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.635419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.635430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.635524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.635533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.635709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.635725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.635964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.635978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 
00:27:07.056 [2024-07-12 17:35:25.636073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.636086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.636231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.636244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.636345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.636358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.636612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.636626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.636812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.636826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 
00:27:07.056 [2024-07-12 17:35:25.636990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.637004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.637189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.637203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.637353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.637367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.637593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.637608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.637768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.637782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 
00:27:07.056 [2024-07-12 17:35:25.637866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.637879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.638042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.638058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.638226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.638239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.638409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.638423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 00:27:07.056 [2024-07-12 17:35:25.638690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.638703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.056 qpair failed and we were unable to recover it. 
00:27:07.056 [2024-07-12 17:35:25.638821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.056 [2024-07-12 17:35:25.638835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.639049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.639063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.639300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.639314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.639506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.639520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.639689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.639702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 
00:27:07.057 [2024-07-12 17:35:25.639938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.639952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.640166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.640180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.640360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.640373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.640591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.640605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.640770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.640784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 
00:27:07.057 [2024-07-12 17:35:25.640883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.640897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.641086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.641100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.641271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.641285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.641455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.641469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.641637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.641651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 
00:27:07.057 [2024-07-12 17:35:25.641823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.641853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.641986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.642015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.642282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.642296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.642539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.642554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.642652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.642666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 
00:27:07.057 [2024-07-12 17:35:25.642847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.642861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.642978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.642991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.643146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.643158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.643338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.643348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.643509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.643520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 
00:27:07.057 [2024-07-12 17:35:25.643671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.643681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.643906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.643935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.644101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.644131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.644327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.644357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.644592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.644603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 
00:27:07.057 [2024-07-12 17:35:25.644695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.644704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.644913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.644922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.645026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.645035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.645175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.645184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.645292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.645301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 
00:27:07.057 [2024-07-12 17:35:25.645540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.645549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.645639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.645648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.645881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.645890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.645977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.645986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 00:27:07.057 [2024-07-12 17:35:25.646147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.057 [2024-07-12 17:35:25.646156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.057 qpair failed and we were unable to recover it. 
00:27:07.057 [2024-07-12 17:35:25.646250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.057 [2024-07-12 17:35:25.646259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.057 qpair failed and we were unable to recover it.
[... the three messages above repeat for each retried connection attempt to 10.0.0.2:4420 (timestamps 17:35:25.646250 through 17:35:25.663280); repeats elided ...]
00:27:07.060 [2024-07-12 17:35:25.663419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.060 [2024-07-12 17:35:25.663429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.060 qpair failed and we were unable to recover it. 00:27:07.060 [2024-07-12 17:35:25.663578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.663587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.663892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.663921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.664034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.664063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.664341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.664370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 
00:27:07.061 [2024-07-12 17:35:25.664554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.664584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.664727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.664756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.664934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.664943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.665177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.665186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.665417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.665449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 
00:27:07.061 [2024-07-12 17:35:25.665634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.665664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.665862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.665891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.666081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.666110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.666253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.666283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.666480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.666511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 
00:27:07.061 [2024-07-12 17:35:25.666658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.666695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.666782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.666792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.666967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.666977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.667050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.667058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.667240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.667250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 
00:27:07.061 [2024-07-12 17:35:25.667398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.667429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.667554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.667584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.667810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.667839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.667996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.668006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.668177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.668187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 
00:27:07.061 [2024-07-12 17:35:25.668278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.668287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.668390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.668400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.668503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.668513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.668653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.668665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.668843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.668852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 
00:27:07.061 [2024-07-12 17:35:25.668939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.668948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.669094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.669103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.669189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.669198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.669351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.669361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.669463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.669472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 
00:27:07.061 [2024-07-12 17:35:25.669567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.669577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.669660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.669669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.669748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.669756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.669838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.669847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.670001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.670010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 
00:27:07.061 [2024-07-12 17:35:25.670218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.670227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.670307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.061 [2024-07-12 17:35:25.670316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.061 qpair failed and we were unable to recover it. 00:27:07.061 [2024-07-12 17:35:25.670408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.670418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.670518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.670527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.670613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.670623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 
00:27:07.062 [2024-07-12 17:35:25.670728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.670737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.670808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.670817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.670970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.670979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.671133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.671143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.671282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.671292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 
00:27:07.062 [2024-07-12 17:35:25.671368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.671381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.671560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.671570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.671711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.671721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.671871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.671881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.671967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.671977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 
00:27:07.062 [2024-07-12 17:35:25.672074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.672083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.672168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.672177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.672263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.672272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.672422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.672433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.672663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.672673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 
00:27:07.062 [2024-07-12 17:35:25.672762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.672771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.672866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.672875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.673026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.673036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.673196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.673206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.673357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.673367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 
00:27:07.062 [2024-07-12 17:35:25.673528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.673538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.673688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.673698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.673778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.673787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.674001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.674013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.674241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.674251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 
00:27:07.062 [2024-07-12 17:35:25.674349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.674358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.674514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.674524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.674619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.674628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.674789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.674799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.675013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.675023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 
00:27:07.062 [2024-07-12 17:35:25.675114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.675123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.675264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.675274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.675347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.675356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.675582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.675592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.675806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.675816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 
00:27:07.062 [2024-07-12 17:35:25.675983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.675993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.676225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.676235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.062 [2024-07-12 17:35:25.676459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.062 [2024-07-12 17:35:25.676469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.062 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.676586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.676596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.676846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.676856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 
00:27:07.063 [2024-07-12 17:35:25.677026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.677035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.677128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.677137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.677287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.677296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.677506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.677517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.677723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.677732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 
00:27:07.063 [2024-07-12 17:35:25.677828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.677837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.678050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.678060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.678215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.678224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.678386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.678396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.678562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.678572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 
00:27:07.063 [2024-07-12 17:35:25.678679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.678689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.678849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.678859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.679022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.679032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.679230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.679260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.679467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.679478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 
00:27:07.063 [2024-07-12 17:35:25.679651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.679660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.679852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.679862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.680098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.680108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.680345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.680354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.680529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.680539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 
00:27:07.063 [2024-07-12 17:35:25.680771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.680801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.680991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.681021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.681241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.681271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.681540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.681576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.681805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.681840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 
00:27:07.063 [2024-07-12 17:35:25.682082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.682092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.682278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.682287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.682440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.682450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.682625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.682635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.682811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.682820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 
00:27:07.063 [2024-07-12 17:35:25.683089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.683099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.683240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.683250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.683440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.683450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.683548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.683557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.683778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.683788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 
00:27:07.063 [2024-07-12 17:35:25.683942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.683951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.684187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.684198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.684423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.684434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.063 [2024-07-12 17:35:25.684664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.063 [2024-07-12 17:35:25.684674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.063 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.684883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.684892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 
00:27:07.064 [2024-07-12 17:35:25.685048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.685057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.685241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.685251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.685508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.685519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.685753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.685763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.685920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.685930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 
00:27:07.064 [2024-07-12 17:35:25.686176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.686186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.686414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.686424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.686631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.686641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.686928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.686938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.687097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.687107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 
00:27:07.064 [2024-07-12 17:35:25.687215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.687225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.687365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.687375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.687622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.687632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.687842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.687851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.688057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.688067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 
00:27:07.064 [2024-07-12 17:35:25.688226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.688236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.688467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.688477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.688663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.688673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.688832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.688863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.689140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.689170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 
00:27:07.064 [2024-07-12 17:35:25.689458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.689468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.689704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.689714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.689988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.689998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.690193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.690206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.690449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.690461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 
00:27:07.064 [2024-07-12 17:35:25.690627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.690636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.690874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.690884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.691083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.691093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.691355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.691365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.691633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.691643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 
00:27:07.064 [2024-07-12 17:35:25.691802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.691811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.692029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.692039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.692295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.064 [2024-07-12 17:35:25.692305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.064 qpair failed and we were unable to recover it. 00:27:07.064 [2024-07-12 17:35:25.692475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.065 [2024-07-12 17:35:25.692486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.065 qpair failed and we were unable to recover it. 00:27:07.065 [2024-07-12 17:35:25.692694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.065 [2024-07-12 17:35:25.692704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.065 qpair failed and we were unable to recover it. 
00:27:07.065 [2024-07-12 17:35:25.692809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.065 [2024-07-12 17:35:25.692821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.065 qpair failed and we were unable to recover it. 00:27:07.065 [2024-07-12 17:35:25.693011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.065 [2024-07-12 17:35:25.693021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.065 qpair failed and we were unable to recover it. 00:27:07.065 [2024-07-12 17:35:25.693251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.065 [2024-07-12 17:35:25.693261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.065 qpair failed and we were unable to recover it. 00:27:07.065 [2024-07-12 17:35:25.693442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.065 [2024-07-12 17:35:25.693452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.065 qpair failed and we were unable to recover it. 00:27:07.065 [2024-07-12 17:35:25.693632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.065 [2024-07-12 17:35:25.693641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.065 qpair failed and we were unable to recover it. 
00:27:07.065 [2024-07-12 17:35:25.693824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.065 [2024-07-12 17:35:25.693833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.065 qpair failed and we were unable to recover it. 00:27:07.065 [2024-07-12 17:35:25.693991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.065 [2024-07-12 17:35:25.694001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.065 qpair failed and we were unable to recover it. 00:27:07.065 [2024-07-12 17:35:25.694086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.065 [2024-07-12 17:35:25.694095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.065 qpair failed and we were unable to recover it. 00:27:07.065 [2024-07-12 17:35:25.694312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.065 [2024-07-12 17:35:25.694321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.065 qpair failed and we were unable to recover it. 00:27:07.065 [2024-07-12 17:35:25.694528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.065 [2024-07-12 17:35:25.694539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.065 qpair failed and we were unable to recover it. 
00:27:07.065 [2024-07-12 17:35:25.694720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.065 [2024-07-12 17:35:25.694730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.065 qpair failed and we were unable to recover it.
[the three entries above repeat roughly 85 more times, timestamps 17:35:25.694906 through 17:35:25.710135, all for tqpair=0x7f4a7c000b90, addr=10.0.0.2, port=4420]
00:27:07.067 [2024-07-12 17:35:25.710448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.067 [2024-07-12 17:35:25.710488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.067 qpair failed and we were unable to recover it.
[the same three-entry pattern repeats roughly 30 more times, timestamps 17:35:25.710633 through 17:35:25.716682, for tqpair=0x7f4a84000b90, addr=10.0.0.2, port=4420]
00:27:07.068 [2024-07-12 17:35:25.716834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.716863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.717137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.717166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.717299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.717328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.717584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.717598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.717765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.717779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 
00:27:07.068 [2024-07-12 17:35:25.717938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.717951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.718213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.718226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.718388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.718405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.718645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.718658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.718824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.718837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 
00:27:07.068 [2024-07-12 17:35:25.719085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.719106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.719329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.719343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.719514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.719529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.719744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.719758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.719975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.719988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 
00:27:07.068 [2024-07-12 17:35:25.720304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.720317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.720564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.720578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.720762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.720775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.720989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.721002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.721230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.721244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 
00:27:07.068 [2024-07-12 17:35:25.721494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.721508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.721681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.721694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.721885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.721899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.722153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.722167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.722341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.722355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 
00:27:07.068 [2024-07-12 17:35:25.722541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.722555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.722796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.722810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.723078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.723092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.723253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.723266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.723434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.723448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 
00:27:07.068 [2024-07-12 17:35:25.723703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.068 [2024-07-12 17:35:25.723716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.068 qpair failed and we were unable to recover it. 00:27:07.068 [2024-07-12 17:35:25.723929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.723942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.724155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.724169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.724423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.724437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.724619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.724633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 
00:27:07.069 [2024-07-12 17:35:25.724749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.724762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.724879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.724892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.725045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.725061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.725237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.725250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.725475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.725489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 
00:27:07.069 [2024-07-12 17:35:25.725712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.725726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.725919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.725932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.726144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.726158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.726338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.726351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.726609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.726623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 
00:27:07.069 [2024-07-12 17:35:25.726837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.726851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.727012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.727026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.727247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.727261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.727421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.727436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.727597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.727627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 
00:27:07.069 [2024-07-12 17:35:25.727770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.727800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.728022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.728052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.728341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.728370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.728660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.728690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.728936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.728950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 
00:27:07.069 [2024-07-12 17:35:25.729228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.729241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.729479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.729493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.729604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.729618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.729784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.729797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.729915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.729929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 
00:27:07.069 [2024-07-12 17:35:25.730168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.730182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.730335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.730348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.730452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.730465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.730641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.730654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 00:27:07.069 [2024-07-12 17:35:25.730850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.730865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.069 qpair failed and we were unable to recover it. 
00:27:07.069 [2024-07-12 17:35:25.731075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.069 [2024-07-12 17:35:25.731089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.731265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.731278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.731384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.731397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.731587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.731600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.731753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.731767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 
00:27:07.070 [2024-07-12 17:35:25.732006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.732019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.732179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.732192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.732365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.732382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.732553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.732567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.732780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.732793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 
00:27:07.070 [2024-07-12 17:35:25.732960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.732973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.733211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.733224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.733468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.733482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.733668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.733682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.733896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.733910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 
00:27:07.070 [2024-07-12 17:35:25.734068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.734081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.734318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.734331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.734558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.734572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.734842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.734856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.734970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.734983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 
00:27:07.070 [2024-07-12 17:35:25.735220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.735234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.735433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.735447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.735661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.735674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.735883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.735912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.736197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.736227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 
00:27:07.070 [2024-07-12 17:35:25.736523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.736554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.736745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.736779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.736966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.736996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.737211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.737241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.737439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.737470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 
00:27:07.070 [2024-07-12 17:35:25.737655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.737684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.737976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.738004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.738265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.738294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.738546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.738576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.738760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.738789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 
00:27:07.070 [2024-07-12 17:35:25.739064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.739093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.739294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.739323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.739582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.739612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.070 [2024-07-12 17:35:25.739888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.070 [2024-07-12 17:35:25.739917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.070 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.740214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.740227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 
00:27:07.071 [2024-07-12 17:35:25.740454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.740468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.740684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.740698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.740942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.740955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.741194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.741207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.741451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.741465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 
00:27:07.071 [2024-07-12 17:35:25.741631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.741644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.741882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.741896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.741979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.741992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.742209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.742222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.742436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.742450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 
00:27:07.071 [2024-07-12 17:35:25.742683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.742697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.742980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.742993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.743234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.743248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.743416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.743430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.743682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.743711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 
00:27:07.071 [2024-07-12 17:35:25.743922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.743951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.744162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.744191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.744418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.744449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.744650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.744664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.744885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.744914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 
00:27:07.071 [2024-07-12 17:35:25.745128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.745157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.745343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.745372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.745568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.745598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.745847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.745876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.746056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.746070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 
00:27:07.071 [2024-07-12 17:35:25.746313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.746326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.746430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.746445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.746542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.746560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.746653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.746666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.746832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.746845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 
00:27:07.071 [2024-07-12 17:35:25.747094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.747124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.747307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.747336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.747605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.747635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.747843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.747872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.748150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.748164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 
00:27:07.071 [2024-07-12 17:35:25.748381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.748395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.748583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.748596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.748749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.748762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.748982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.749011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 00:27:07.071 [2024-07-12 17:35:25.749213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.071 [2024-07-12 17:35:25.749242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.071 qpair failed and we were unable to recover it. 
00:27:07.071 [2024-07-12 17:35:25.749519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.749550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.749805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.749835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.750053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.750082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.750315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.750328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.750566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.750579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 
00:27:07.072 [2024-07-12 17:35:25.750744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.750757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.750979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.751008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.751285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.751313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.751563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.751594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.751856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.751885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 
00:27:07.072 [2024-07-12 17:35:25.752158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.752187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.752439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.752470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.752731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.752761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.753029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.753059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.753357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.753403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 
00:27:07.072 [2024-07-12 17:35:25.753665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.753678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.753918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.753931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.754130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.754143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.754298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.754311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.754560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.754591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 
00:27:07.072 [2024-07-12 17:35:25.754804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.754833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.755127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.755156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.755421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.755452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.755753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.755782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 00:27:07.072 [2024-07-12 17:35:25.755997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.072 [2024-07-12 17:35:25.756010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.072 qpair failed and we were unable to recover it. 
00:27:07.072 [2024-07-12 17:35:25.756155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.072 [2024-07-12 17:35:25.756168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.072 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure triplet above repeats verbatim for tqpair=0x24deed0, timestamps 17:35:25.756 through 17:35:25.770 ...]
00:27:07.074 [2024-07-12 17:35:25.770343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.074 [2024-07-12 17:35:25.770427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.074 qpair failed and we were unable to recover it.
[... repeats verbatim for tqpair=0x7f4a74000b90, timestamps 17:35:25.770 through 17:35:25.780 ...]
00:27:07.357 [2024-07-12 17:35:25.780155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.357 [2024-07-12 17:35:25.780182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.357 qpair failed and we were unable to recover it.
[... repeats verbatim for tqpair=0x7f4a7c000b90, timestamps 17:35:25.780 through 17:35:25.784 ...]
00:27:07.357 [2024-07-12 17:35:25.784071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.357 [2024-07-12 17:35:25.784080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.357 qpair failed and we were unable to recover it. 00:27:07.357 [2024-07-12 17:35:25.784223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.357 [2024-07-12 17:35:25.784233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.357 qpair failed and we were unable to recover it. 00:27:07.357 [2024-07-12 17:35:25.784345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.357 [2024-07-12 17:35:25.784355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.357 qpair failed and we were unable to recover it. 00:27:07.357 [2024-07-12 17:35:25.784449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.357 [2024-07-12 17:35:25.784459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.357 qpair failed and we were unable to recover it. 00:27:07.357 [2024-07-12 17:35:25.784612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.357 [2024-07-12 17:35:25.784622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.357 qpair failed and we were unable to recover it. 
00:27:07.357 [2024-07-12 17:35:25.784830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.357 [2024-07-12 17:35:25.784839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.357 qpair failed and we were unable to recover it. 00:27:07.357 [2024-07-12 17:35:25.785000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.357 [2024-07-12 17:35:25.785010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.357 qpair failed and we were unable to recover it. 00:27:07.357 [2024-07-12 17:35:25.785166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.357 [2024-07-12 17:35:25.785176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.357 qpair failed and we were unable to recover it. 00:27:07.357 [2024-07-12 17:35:25.785397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.357 [2024-07-12 17:35:25.785408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.357 qpair failed and we were unable to recover it. 00:27:07.357 [2024-07-12 17:35:25.785674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.357 [2024-07-12 17:35:25.785684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.357 qpair failed and we were unable to recover it. 
00:27:07.357 [2024-07-12 17:35:25.785923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.357 [2024-07-12 17:35:25.785933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.357 qpair failed and we were unable to recover it. 00:27:07.357 [2024-07-12 17:35:25.786081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.357 [2024-07-12 17:35:25.786091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.357 qpair failed and we were unable to recover it. 00:27:07.357 [2024-07-12 17:35:25.786333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.357 [2024-07-12 17:35:25.786343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.357 qpair failed and we were unable to recover it. 00:27:07.357 [2024-07-12 17:35:25.786442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.786452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.786706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.786716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 
00:27:07.358 [2024-07-12 17:35:25.786926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.786936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.787105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.787114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.787280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.787289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.787518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.787528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.787743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.787752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 
00:27:07.358 [2024-07-12 17:35:25.787909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.787918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.788089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.788098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.788307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.788317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.788550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.788560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.788718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.788727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 
00:27:07.358 [2024-07-12 17:35:25.788954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.788964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.789190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.789201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.789431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.789441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.789651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.789661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.789749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.789758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 
00:27:07.358 [2024-07-12 17:35:25.789841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.789850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.790005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.790015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.790191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.790201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.790353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.790363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.790597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.790608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 
00:27:07.358 [2024-07-12 17:35:25.790859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.790868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.791105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.791115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.791270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.791279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.791369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.791383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.791589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.791599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 
00:27:07.358 [2024-07-12 17:35:25.791806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.791816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.792032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.792041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.792268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.358 [2024-07-12 17:35:25.792278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.358 qpair failed and we were unable to recover it. 00:27:07.358 [2024-07-12 17:35:25.792557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.792568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.792657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.792666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 
00:27:07.359 [2024-07-12 17:35:25.792874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.792884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.793106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.793115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.793412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.793422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.793646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.793675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.793930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.793960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 
00:27:07.359 [2024-07-12 17:35:25.794238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.794267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.794567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.794597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.794892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.794902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.795183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.795193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.795415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.795425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 
00:27:07.359 [2024-07-12 17:35:25.795584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.795593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.795696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.795705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.795935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.795945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.796151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.796161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.796306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.796316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 
00:27:07.359 [2024-07-12 17:35:25.796466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.796476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.796581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.796591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.796771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.796782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.797015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.797024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.797260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.797270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 
00:27:07.359 [2024-07-12 17:35:25.797425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.797435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.797668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.797698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.797894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.797923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.798135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.798164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.798408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.798420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 
00:27:07.359 [2024-07-12 17:35:25.798651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.798660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.798897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.798907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.799171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.359 [2024-07-12 17:35:25.799180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.359 qpair failed and we were unable to recover it. 00:27:07.359 [2024-07-12 17:35:25.799346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-07-12 17:35:25.799356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 00:27:07.360 [2024-07-12 17:35:25.799588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.360 [2024-07-12 17:35:25.799598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.360 qpair failed and we were unable to recover it. 
00:27:07.360 [2024-07-12 17:35:25.799833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.799843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.799952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.799962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.800167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.800177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.800403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.800414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.800597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.800607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.800701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.800711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.800892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.800902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.801129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.801138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.801270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.801280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.801527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.801537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.801744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.801754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.801984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.801994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.802282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.802292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.802474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.802485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.802669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.802680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.802778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.802787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.802952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.802962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.803118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.803128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.803313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.803324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.803474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.803485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.803744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.803754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.803908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.803919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.804159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.804189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.804326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.804355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.804584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.804615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.804808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.804818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.804914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.804923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.805022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.805033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.805288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.805298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.805472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.805482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.805645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.805654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.360 [2024-07-12 17:35:25.805868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.360 [2024-07-12 17:35:25.805878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.360 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.806064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.806074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.806303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.806313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.806546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.806556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.806799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.806809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.806977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.806987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.807215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.807225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.807374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.807388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.807496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.807506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.807648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.807658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.807814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.807824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.808011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.808021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.808237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.808246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.808341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.808350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.808516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.808527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.808779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.808789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.808948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.808958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.809177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.809187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.809417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.809428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.809637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.809646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.809737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.809747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.809835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.809844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.809953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.809963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.810202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.810212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.810500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.810510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.810648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.810658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.810885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.810895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.811054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.811063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.361 [2024-07-12 17:35:25.811222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.361 [2024-07-12 17:35:25.811232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.361 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.811399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.811409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.811581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.811591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.811772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.811782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.811892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.811902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.812125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.812135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.812316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.812326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.812468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.812478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.812631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.812643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.812803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.812813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.813064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.813074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.813235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.813245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.813339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.813349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.813580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.813590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.813820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.813830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.813931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.813941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.814152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.814162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.814319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.814329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.814547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.814557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.814717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.814727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.814828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.814838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.814994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.815004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.362 qpair failed and we were unable to recover it.
00:27:07.362 [2024-07-12 17:35:25.815236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.362 [2024-07-12 17:35:25.815246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.815346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.815355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.815453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.815462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.815622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.815632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.815716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.815725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.815878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.815888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.815993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.816003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.816214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.816224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.816314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.816323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.816476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.816486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.816679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.816689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.816896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.816906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.817061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.817071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.817280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.817290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.817387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.817396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.817471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.817480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.817584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.817593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.817666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.817675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.817816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.817826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.817965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.817975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.818220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.818230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.818480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.818491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.818649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.818658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.818866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.818876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.819142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.819152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.819372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.819391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.819643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.819655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.819817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.819827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.820032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.820041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.820227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.363 [2024-07-12 17:35:25.820237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.363 qpair failed and we were unable to recover it.
00:27:07.363 [2024-07-12 17:35:25.820458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.364 [2024-07-12 17:35:25.820468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.364 qpair failed and we were unable to recover it.
00:27:07.364 [2024-07-12 17:35:25.820677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.364 [2024-07-12 17:35:25.820687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.364 qpair failed and we were unable to recover it.
00:27:07.364 [2024-07-12 17:35:25.820783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.364 [2024-07-12 17:35:25.820792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.364 qpair failed and we were unable to recover it.
00:27:07.364 [2024-07-12 17:35:25.821047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.364 [2024-07-12 17:35:25.821057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.364 qpair failed and we were unable to recover it.
00:27:07.364 [2024-07-12 17:35:25.821219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.821229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.821475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.821485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.821638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.821649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.821761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.821771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.821948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.821958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 
00:27:07.364 [2024-07-12 17:35:25.822129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.822158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.822404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.822435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.822625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.822654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.822913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.822923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.823133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.823144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 
00:27:07.364 [2024-07-12 17:35:25.823385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.823395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.823581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.823591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.823743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.823753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.823987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.824016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.824161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.824190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 
00:27:07.364 [2024-07-12 17:35:25.824415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.824445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.824670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.824699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.824928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.824957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.825169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.825179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.825284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.825294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 
00:27:07.364 [2024-07-12 17:35:25.825444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.825455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.825614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.825624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.825722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.825732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.825826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.825835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.825999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.826009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 
00:27:07.364 [2024-07-12 17:35:25.826228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.826257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.826459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.826493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.826691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.826721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.364 [2024-07-12 17:35:25.826987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.364 [2024-07-12 17:35:25.826999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.364 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.827138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.827149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-07-12 17:35:25.827292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.827302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.827411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.827423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.827648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.827660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.827763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.827773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.827882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.827891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-07-12 17:35:25.828043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.828053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.828154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.828164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.828328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.828338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.828593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.828604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.828699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.828708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-07-12 17:35:25.828917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.828927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.829056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.829065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.829299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.829310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.829451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.829462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.829633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.829642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-07-12 17:35:25.829831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.829840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.829937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.829946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.830047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.830057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.830227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.830237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.830450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.830481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-07-12 17:35:25.830605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.830634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.830774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.830803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.831055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.831085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.831276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.831305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.831573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.831603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-07-12 17:35:25.831814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.831843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.832136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.832166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.832303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.832332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.832532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.832562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 00:27:07.365 [2024-07-12 17:35:25.832842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.365 [2024-07-12 17:35:25.832871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.365 qpair failed and we were unable to recover it. 
00:27:07.365 [2024-07-12 17:35:25.833088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.833117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.833267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.833277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.833448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.833458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.833620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.833630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.833861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.833871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 
00:27:07.366 [2024-07-12 17:35:25.833975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.833985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.834096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.834106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.834311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.834321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.834471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.834482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.834605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.834615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 
00:27:07.366 [2024-07-12 17:35:25.834712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.834721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.834830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.834840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.834912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.834922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.835105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.835114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.835314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.835325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 
00:27:07.366 [2024-07-12 17:35:25.835489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.835499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.835659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.835669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.835836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.835846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.835989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.835999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 00:27:07.366 [2024-07-12 17:35:25.836074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.366 [2024-07-12 17:35:25.836084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.366 qpair failed and we were unable to recover it. 
00:27:07.369 [2024-07-12 17:35:25.857638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.369 [2024-07-12 17:35:25.857706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.369 qpair failed and we were unable to recover it.
00:27:07.369 [2024-07-12 17:35:25.857933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.369 [2024-07-12 17:35:25.857965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.369 qpair failed and we were unable to recover it.
00:27:07.369 [2024-07-12 17:35:25.858260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.369 [2024-07-12 17:35:25.858290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.369 qpair failed and we were unable to recover it.
00:27:07.369 [2024-07-12 17:35:25.858541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.369 [2024-07-12 17:35:25.858572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.369 qpair failed and we were unable to recover it.
00:27:07.369 [2024-07-12 17:35:25.858694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.369 [2024-07-12 17:35:25.858723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.369 qpair failed and we were unable to recover it.
00:27:07.369 [2024-07-12 17:35:25.858972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.369 [2024-07-12 17:35:25.859001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.859268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.859282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.859539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.859553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.859768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.859781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.859967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.859980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.860083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.860096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.860330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.860343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.860509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.860524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.860693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.860733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.861016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.861045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.861323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.861352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.861685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.861757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.862036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.862079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.862315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.862329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.862492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.862506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.862673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.862715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.862941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.862970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.863173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.863202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.863346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.863360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.863540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.863554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.863814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.863843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.864064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.864094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.864325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.864355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.864552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.864582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.864729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.864758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.865025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.865054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.865315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.865344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.865552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.865583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.865849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.865878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.866080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.370 [2024-07-12 17:35:25.866094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.370 qpair failed and we were unable to recover it.
00:27:07.370 [2024-07-12 17:35:25.866306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.866320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.866542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.866556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.866740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.866753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.866854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.866870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.867101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.867114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.867278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.867294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.867487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.867518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.867737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.867766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.867969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.867998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.868210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.868239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.868438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.868452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.868668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.868682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.868873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.868886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.869050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.869063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.869317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.869346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.869556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.869586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.869808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.869837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.870091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.870105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.870280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.870293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.870536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.870571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.870758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.870788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.871076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.871086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.871244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.871254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.871422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.871433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.871587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.871597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.871804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.871815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.871956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.871966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.872231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.872261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.872474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.872521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.872800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.872830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.873105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.873135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.873393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.873402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.873618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.873630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.873839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.873849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.874082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.874111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.874245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.874275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.874466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.874496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.371 qpair failed and we were unable to recover it.
00:27:07.371 [2024-07-12 17:35:25.874764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.371 [2024-07-12 17:35:25.874793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.875076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.875106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.875360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.875370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.875593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.875603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.875759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.875769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.876039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.876069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.876270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.876299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.876513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.876543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.876689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.876718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.876914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.876943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.877241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.877271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.877397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.877427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.877564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.877593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.877800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.877830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.878014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.878024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.878199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.878229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.878510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.878541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.878742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.878771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.878917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.878947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.879153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.879182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.879403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.879434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.879618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.879648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.879846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.879876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.880090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.372 [2024-07-12 17:35:25.880100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.372 qpair failed and we were unable to recover it.
00:27:07.372 [2024-07-12 17:35:25.882612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.882646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.882788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.882817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.885631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.885666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.885898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.885906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.886066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.886075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 
00:27:07.372 [2024-07-12 17:35:25.886235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.886244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.886333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.886342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.886571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.886581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.886691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.886700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.886912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.886922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 
00:27:07.372 [2024-07-12 17:35:25.887029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.887039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.887289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.887323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.887619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.887650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.887807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.887836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.888134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.888163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 
00:27:07.372 [2024-07-12 17:35:25.888433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.888443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.888539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.372 [2024-07-12 17:35:25.888548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.372 qpair failed and we were unable to recover it. 00:27:07.372 [2024-07-12 17:35:25.888775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.888785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.888886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.888895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.889080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.889089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 
00:27:07.373 [2024-07-12 17:35:25.889314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.889323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.889439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.889448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.889611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.889621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.889870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.889880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.889985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.890006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 
00:27:07.373 [2024-07-12 17:35:25.890247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.890257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.890405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.890416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.890657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.890686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.890965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.890994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.891198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.891228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 
00:27:07.373 [2024-07-12 17:35:25.891473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.891504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.891755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.891784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.891980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.892010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.892185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.892195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.892355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.892365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 
00:27:07.373 [2024-07-12 17:35:25.892530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.892540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.892758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.892768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.892875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.892885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.893193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.893260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.893479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.893514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 
00:27:07.373 [2024-07-12 17:35:25.893724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.893754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.893950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.893979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.894180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.894210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.894469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.894483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.894651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.894665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 
00:27:07.373 [2024-07-12 17:35:25.894884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.894897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.895169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.895198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.895485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.895516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.895719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.895748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.895979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.895992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 
00:27:07.373 [2024-07-12 17:35:25.896214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.896227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.896443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.373 [2024-07-12 17:35:25.896461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.373 qpair failed and we were unable to recover it. 00:27:07.373 [2024-07-12 17:35:25.896707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.896736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.896941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.896970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.897243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.897272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 
00:27:07.374 [2024-07-12 17:35:25.897557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.897588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.897786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.897815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.898085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.898098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.898319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.898332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.898497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.898512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 
00:27:07.374 [2024-07-12 17:35:25.898728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.898742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.898912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.898926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.899019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.899031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.899195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.899209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.899425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.899439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 
00:27:07.374 [2024-07-12 17:35:25.899726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.899755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.900083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.900113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.900361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.900375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.900552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.900566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.900798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.900812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 
00:27:07.374 [2024-07-12 17:35:25.901041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.901070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.901269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.901298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.901509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.901539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.901743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.901771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.902005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.902034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 
00:27:07.374 [2024-07-12 17:35:25.902223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.902237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.902414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.902428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.902520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.902532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.902748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.902781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.902955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.902970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 
00:27:07.374 [2024-07-12 17:35:25.903201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.903231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.903462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.903494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.903670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.903700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.903976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.904005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 00:27:07.374 [2024-07-12 17:35:25.904253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.374 [2024-07-12 17:35:25.904295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.374 qpair failed and we were unable to recover it. 
00:27:07.374 [2024-07-12 17:35:25.904541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.374 [2024-07-12 17:35:25.904557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.374 qpair failed and we were unable to recover it.
00:27:07.374 [2024-07-12 17:35:25.904721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.374 [2024-07-12 17:35:25.904734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.374 qpair failed and we were unable to recover it.
00:27:07.374 [2024-07-12 17:35:25.904995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.374 [2024-07-12 17:35:25.905024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.374 qpair failed and we were unable to recover it.
00:27:07.374 [2024-07-12 17:35:25.905275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.374 [2024-07-12 17:35:25.905304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.374 qpair failed and we were unable to recover it.
00:27:07.374 [2024-07-12 17:35:25.905514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.374 [2024-07-12 17:35:25.905529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.374 qpair failed and we were unable to recover it.
00:27:07.374 [2024-07-12 17:35:25.905699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.905734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.905933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.905963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.906271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.906301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.906500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.906531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.906725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.906754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.906957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.906986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.907183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.907213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.907508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.907539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.907766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.907795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.908055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.908084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.908342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.908372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.908609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.908622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.908746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.908759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.908859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.908872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.909237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.909267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.909532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.909567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.909821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.909851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.910172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.910201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.910484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.910515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.910673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.910703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.910850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.910879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.911144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.911173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.911446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.911460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.911629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.911642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.911802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.911815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.912003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.912017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.912280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.912294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.912483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.912497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.912688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.912717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.912876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.912906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.913196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.913225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.913407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.913421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.913522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.913534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.913698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.913711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.913954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.913985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.375 qpair failed and we were unable to recover it.
00:27:07.375 [2024-07-12 17:35:25.914224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.375 [2024-07-12 17:35:25.914254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.914490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.914505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.914675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.914688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.914907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.914921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.915186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.915215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.915353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.915461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.915685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.915715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.915917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.915952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.916254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.916283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.916474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.916488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.916635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.916648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.916891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.916904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.917098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.917111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.917325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.917339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.917491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.917505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.917614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.917627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.917732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.917745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.917846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.917860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.918075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.918089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.918237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.918251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.918504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.918535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.918683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.918713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.918965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.918994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.919190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.919219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.919352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.919403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.919559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.919588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.919841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.919870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.920080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.920110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.920321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.920350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.920510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.920540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.920831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.920860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.921144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.921174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.921436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.921484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.921674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.921688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.921848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.921864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.922103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.922132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.922301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.922330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.922485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.922516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.922674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.922703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.922979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.923009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.376 qpair failed and we were unable to recover it.
00:27:07.376 [2024-07-12 17:35:25.923283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.376 [2024-07-12 17:35:25.923311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.923569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.923584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.923711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.923724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.923990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.924020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.924291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.924305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.924501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.924515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.924666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.924679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.924877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.924907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.925173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.925203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.925503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.925518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.925618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.925631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.925737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.925751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.925911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.925925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.926135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.926148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.926389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.926403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.926639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.926653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.926764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.926778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.926939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.926952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.927233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.927247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.927415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.927429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.927591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.927604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.927879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.927892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.928144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.928158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.928334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.928348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.928552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.928565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.928736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.928749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.928910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.928953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.929198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.929228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.929480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.929494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.929712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.929725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.377 [2024-07-12 17:35:25.929940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.377 [2024-07-12 17:35:25.929953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.377 qpair failed and we were unable to recover it.
00:27:07.378 [2024-07-12 17:35:25.930193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.378 [2024-07-12 17:35:25.930207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.378 qpair failed and we were unable to recover it.
00:27:07.378 [2024-07-12 17:35:25.930471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.378 [2024-07-12 17:35:25.930485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.378 qpair failed and we were unable to recover it.
00:27:07.378 [2024-07-12 17:35:25.930608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.378 [2024-07-12 17:35:25.930621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.378 qpair failed and we were unable to recover it.
00:27:07.378 [2024-07-12 17:35:25.930746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.378 [2024-07-12 17:35:25.930759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.378 qpair failed and we were unable to recover it.
00:27:07.378 [2024-07-12 17:35:25.930913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.930931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.931133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.931147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.931312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.931325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.931566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.931580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.931866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.931880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 
00:27:07.378 [2024-07-12 17:35:25.932106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.932120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.932359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.932373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.932567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.932581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.932759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.932788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.932990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.933020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 
00:27:07.378 [2024-07-12 17:35:25.933273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.933302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.933537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.933551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.933719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.933749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.934002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.934031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.934233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.934268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 
00:27:07.378 [2024-07-12 17:35:25.934496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.934510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.934748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.934761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.934924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.934937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.935207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.935220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.935389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.935403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 
00:27:07.378 [2024-07-12 17:35:25.935514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.935528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.935622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.935635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.935825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.935838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.936030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.936043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.936148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.936161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 
00:27:07.378 [2024-07-12 17:35:25.936292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.936306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.936407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.936420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.936521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.936537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.936734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.936748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.936963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.936976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 
00:27:07.378 [2024-07-12 17:35:25.937136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.937150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.937346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.937359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.378 qpair failed and we were unable to recover it. 00:27:07.378 [2024-07-12 17:35:25.937462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.378 [2024-07-12 17:35:25.937475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.937701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.937714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.937928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.937942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 
00:27:07.379 [2024-07-12 17:35:25.938117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.938130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.938373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.938414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.938554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.938584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.938854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.938883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.939033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.939062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 
00:27:07.379 [2024-07-12 17:35:25.939222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.939252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.939470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.939485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.939649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.939662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.939885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.939914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.940126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.940156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 
00:27:07.379 [2024-07-12 17:35:25.940339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.940367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.940568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.940582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.940819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.940833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.940924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.940938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.941026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.941038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 
00:27:07.379 [2024-07-12 17:35:25.941268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.941282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.941444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.941458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.941693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.941707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.941946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.941960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.942069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.942085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 
00:27:07.379 [2024-07-12 17:35:25.942177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.942190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.942451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.942465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.942583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.942596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.942694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.942707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.942822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.942835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 
00:27:07.379 [2024-07-12 17:35:25.943106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.943120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.943337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.943351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.943592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.943607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.943822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.943836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.944029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.944043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 
00:27:07.379 [2024-07-12 17:35:25.944237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.379 [2024-07-12 17:35:25.944267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.379 qpair failed and we were unable to recover it. 00:27:07.379 [2024-07-12 17:35:25.944471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.380 [2024-07-12 17:35:25.944502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.380 qpair failed and we were unable to recover it. 00:27:07.380 [2024-07-12 17:35:25.944697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.380 [2024-07-12 17:35:25.944726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.380 qpair failed and we were unable to recover it. 00:27:07.380 [2024-07-12 17:35:25.944953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.380 [2024-07-12 17:35:25.944982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.380 qpair failed and we were unable to recover it. 00:27:07.380 [2024-07-12 17:35:25.945239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.380 [2024-07-12 17:35:25.945268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.380 qpair failed and we were unable to recover it. 
00:27:07.380 [2024-07-12 17:35:25.945430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.380 [2024-07-12 17:35:25.945443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.380 qpair failed and we were unable to recover it. 00:27:07.380 [2024-07-12 17:35:25.945680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.380 [2024-07-12 17:35:25.945709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.380 qpair failed and we were unable to recover it. 00:27:07.380 [2024-07-12 17:35:25.945894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.380 [2024-07-12 17:35:25.945923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.380 qpair failed and we were unable to recover it. 00:27:07.380 [2024-07-12 17:35:25.946220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.380 [2024-07-12 17:35:25.946250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.380 qpair failed and we were unable to recover it. 00:27:07.380 [2024-07-12 17:35:25.946443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.380 [2024-07-12 17:35:25.946473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.380 qpair failed and we were unable to recover it. 
00:27:07.380 [2024-07-12 17:35:25.946693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.946722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.946948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.946977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.947227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.947255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.947519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.947549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.947821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.947850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.948139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.948168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.948359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.948402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.948578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.948591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.948806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.948820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.949072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.949102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.949355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.949411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.949706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.949735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.949932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.949961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.950176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.950205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.950451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.950465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.950702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.950715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.950876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.950890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.951148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.951177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.951459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.951489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.951719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.951747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.951936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.952002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.952241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.952274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.952536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.952553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.952705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.952718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.952820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.952833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.952951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.952964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.953131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.953145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.953370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.953391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.953634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.953648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.953912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.953926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.954144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.954157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.380 [2024-07-12 17:35:25.954407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.380 [2024-07-12 17:35:25.954420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.380 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.954584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.954598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.954752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.954769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.954886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.954900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.954992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.955004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.955172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.955185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.955432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.955446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.955624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.955637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.955816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.955847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.956152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.956181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.956370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.956411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.956707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.956737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.956869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.956899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.957148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.957178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.957371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.957412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.957686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.957715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.957856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.957886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.958115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.958144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.958347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.958386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.958658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.958687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.958939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.958968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.959249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.959278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.959476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.959490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.959608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.959621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.959719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.959731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.959901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.959914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.960026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.960039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.960199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.960212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.960407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.960438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.960716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.960782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.960933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.960966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.961220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.961250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.961520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.961554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.961695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.961705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.961818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.961828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.961930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.961939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.962133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.962144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.962373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.962386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.962476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.962487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.962590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.962600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.962762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.962772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.381 qpair failed and we were unable to recover it.
00:27:07.381 [2024-07-12 17:35:25.962980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.381 [2024-07-12 17:35:25.962990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.963082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.963091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.963257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.963267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.963533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.963564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.963711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.963740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.963997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.964027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.964302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.964332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.964548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.964559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.964718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.964728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.964879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.964889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.965048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.965058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.965229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.965238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.965340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.965350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.965590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.965621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.965806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.965835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.966092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.966133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.966286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.966296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.966525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.966536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.966679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.966689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.966927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.966938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.967095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.967105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.967339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.967349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.967628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.967639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.967885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.967894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.968003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.968012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.968238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.968249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.968353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.968363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.968526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.968537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.968629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.968640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.968850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.968860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.968956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.968965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.969125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.969135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.969223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.969232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.969391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.969401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.969495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.382 [2024-07-12 17:35:25.969504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.382 qpair failed and we were unable to recover it.
00:27:07.382 [2024-07-12 17:35:25.969736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.383 [2024-07-12 17:35:25.969746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.383 qpair failed and we were unable to recover it.
00:27:07.383 [2024-07-12 17:35:25.969927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.383 [2024-07-12 17:35:25.969938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.383 qpair failed and we were unable to recover it.
00:27:07.383 [2024-07-12 17:35:25.970040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.383 [2024-07-12 17:35:25.970060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.383 qpair failed and we were unable to recover it.
00:27:07.383 [2024-07-12 17:35:25.970201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.383 [2024-07-12 17:35:25.970210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.383 qpair failed and we were unable to recover it.
00:27:07.383 [2024-07-12 17:35:25.970427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.383 [2024-07-12 17:35:25.970458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.383 qpair failed and we were unable to recover it.
00:27:07.383 [2024-07-12 17:35:25.970600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.383 [2024-07-12 17:35:25.970630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.383 qpair failed and we were unable to recover it.
00:27:07.383 [2024-07-12 17:35:25.970816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.383 [2024-07-12 17:35:25.970845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.383 qpair failed and we were unable to recover it.
00:27:07.383 [2024-07-12 17:35:25.971165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.383 [2024-07-12 17:35:25.971195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.383 qpair failed and we were unable to recover it.
00:27:07.383 [2024-07-12 17:35:25.971476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.971486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.971709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.971719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.971930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.971940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.972184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.972194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.972345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.972354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 
00:27:07.383 [2024-07-12 17:35:25.972525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.972548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.972694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.972723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.972905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.972934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.973211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.973240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.973538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.973568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 
00:27:07.383 [2024-07-12 17:35:25.973724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.973754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.973897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.973926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.974144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.974173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.974472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.974503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.974693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.974724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 
00:27:07.383 [2024-07-12 17:35:25.974928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.974957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.975213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.975242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.975457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.975488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.975769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.975798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.976000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.976030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 
00:27:07.383 [2024-07-12 17:35:25.976241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.976270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.976511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.976521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.976665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.976675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.976788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.976797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.383 qpair failed and we were unable to recover it. 00:27:07.383 [2024-07-12 17:35:25.977009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.383 [2024-07-12 17:35:25.977039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 
00:27:07.384 [2024-07-12 17:35:25.977254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.977289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.977496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.977527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.977719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.977749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.977876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.977905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.978040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.978070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 
00:27:07.384 [2024-07-12 17:35:25.978267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.978296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.978567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.978577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.978681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.978693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.978776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.978785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.978947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.978958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 
00:27:07.384 [2024-07-12 17:35:25.979058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.979067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.979210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.979220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.979360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.979369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.979553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.979564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.979659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.979668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 
00:27:07.384 [2024-07-12 17:35:25.979852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.979888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.980174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.980204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.980396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.980426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.980661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.980670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.980781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.980791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 
00:27:07.384 [2024-07-12 17:35:25.980951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.980961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.981061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.981071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.981161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.981170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.981257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.981266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.981407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.981418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 
00:27:07.384 [2024-07-12 17:35:25.981516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.981524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.981614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.981624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.981742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.981752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.981841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.981850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.982203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.982212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 
00:27:07.384 [2024-07-12 17:35:25.982315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.982324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.982417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.982427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.982582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.982592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.982762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.982772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.982878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.982888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 
00:27:07.384 [2024-07-12 17:35:25.982973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.982982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.384 [2024-07-12 17:35:25.983164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.384 [2024-07-12 17:35:25.983174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.384 qpair failed and we were unable to recover it. 00:27:07.385 [2024-07-12 17:35:25.983319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.385 [2024-07-12 17:35:25.983329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.385 qpair failed and we were unable to recover it. 00:27:07.385 [2024-07-12 17:35:25.983496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.385 [2024-07-12 17:35:25.983507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.385 qpair failed and we were unable to recover it. 00:27:07.385 [2024-07-12 17:35:25.983716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.385 [2024-07-12 17:35:25.983745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.385 qpair failed and we were unable to recover it. 
00:27:07.385 [2024-07-12 17:35:25.983872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.385 [2024-07-12 17:35:25.983906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.385 qpair failed and we were unable to recover it. 00:27:07.385 [2024-07-12 17:35:25.984139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.385 [2024-07-12 17:35:25.984168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.385 qpair failed and we were unable to recover it. 00:27:07.385 [2024-07-12 17:35:25.984435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.385 [2024-07-12 17:35:25.984445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.385 qpair failed and we were unable to recover it. 00:27:07.385 [2024-07-12 17:35:25.984558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.385 [2024-07-12 17:35:25.984568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.385 qpair failed and we were unable to recover it. 00:27:07.385 [2024-07-12 17:35:25.984748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.385 [2024-07-12 17:35:25.984758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.385 qpair failed and we were unable to recover it. 
00:27:07.385 [2024-07-12 17:35:25.984865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.385 [2024-07-12 17:35:25.984875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.385 qpair failed and we were unable to recover it. 00:27:07.385 [2024-07-12 17:35:25.984981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.385 [2024-07-12 17:35:25.984991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.385 qpair failed and we were unable to recover it. 00:27:07.385 [2024-07-12 17:35:25.985183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.385 [2024-07-12 17:35:25.985193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.385 qpair failed and we were unable to recover it. 00:27:07.385 [2024-07-12 17:35:25.985282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.385 [2024-07-12 17:35:25.985291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.385 qpair failed and we were unable to recover it. 00:27:07.385 [2024-07-12 17:35:25.985384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.385 [2024-07-12 17:35:25.985393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.385 qpair failed and we were unable to recover it. 
00:27:07.385 [2024-07-12 17:35:25.985638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.385 [2024-07-12 17:35:25.985648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.385 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() errno = 111, i.e. ECONNREFUSED, for tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420) repeats continuously from 17:35:25.985638 through 17:35:26.008479; intermediate duplicate entries elided ...]
00:27:07.388 [2024-07-12 17:35:26.008468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.388 [2024-07-12 17:35:26.008479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.388 qpair failed and we were unable to recover it.
00:27:07.388 [2024-07-12 17:35:26.008697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.388 [2024-07-12 17:35:26.008707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.388 qpair failed and we were unable to recover it. 00:27:07.388 [2024-07-12 17:35:26.008875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.388 [2024-07-12 17:35:26.008885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.388 qpair failed and we were unable to recover it. 00:27:07.388 [2024-07-12 17:35:26.008970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.388 [2024-07-12 17:35:26.008979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.388 qpair failed and we were unable to recover it. 00:27:07.388 [2024-07-12 17:35:26.009070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.388 [2024-07-12 17:35:26.009080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.388 qpair failed and we were unable to recover it. 00:27:07.388 [2024-07-12 17:35:26.009314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.388 [2024-07-12 17:35:26.009324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.388 qpair failed and we were unable to recover it. 
00:27:07.388 [2024-07-12 17:35:26.009490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.388 [2024-07-12 17:35:26.009500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.388 qpair failed and we were unable to recover it. 00:27:07.388 [2024-07-12 17:35:26.009615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.388 [2024-07-12 17:35:26.009625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.388 qpair failed and we were unable to recover it. 00:27:07.388 [2024-07-12 17:35:26.009788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.388 [2024-07-12 17:35:26.009798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.388 qpair failed and we were unable to recover it. 00:27:07.388 [2024-07-12 17:35:26.010052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.388 [2024-07-12 17:35:26.010062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.388 qpair failed and we were unable to recover it. 00:27:07.388 [2024-07-12 17:35:26.010310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.388 [2024-07-12 17:35:26.010319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.388 qpair failed and we were unable to recover it. 
00:27:07.388 [2024-07-12 17:35:26.010460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.388 [2024-07-12 17:35:26.010471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.388 qpair failed and we were unable to recover it. 00:27:07.388 [2024-07-12 17:35:26.010583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.388 [2024-07-12 17:35:26.010593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.388 qpair failed and we were unable to recover it. 00:27:07.388 [2024-07-12 17:35:26.010754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.388 [2024-07-12 17:35:26.010764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.388 qpair failed and we were unable to recover it. 00:27:07.388 [2024-07-12 17:35:26.010868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.010877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.011092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.011102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 
00:27:07.389 [2024-07-12 17:35:26.011276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.011286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.011419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.011439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.011548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.011557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.011700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.011710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.011794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.011803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 
00:27:07.389 [2024-07-12 17:35:26.011952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.011962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.012196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.012224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.012490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.012521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.012715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.012725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.012903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.012932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 
00:27:07.389 [2024-07-12 17:35:26.013214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.013244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.013463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.013504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.013711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.013721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.013890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.013900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.014003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.014012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 
00:27:07.389 [2024-07-12 17:35:26.014170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.014180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.014339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.014350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.014496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.014507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.014669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.014678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.014774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.014782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 
00:27:07.389 [2024-07-12 17:35:26.014974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.014985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.015135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.015145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.015362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.015372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.015531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.015541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.015627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.015636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 
00:27:07.389 [2024-07-12 17:35:26.015744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.015754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.015920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.015930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.016021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.016031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.016212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.016222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.016437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.016448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 
00:27:07.389 [2024-07-12 17:35:26.016561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.016571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.016736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.016747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.016848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.016858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.016938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.016947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 00:27:07.389 [2024-07-12 17:35:26.017160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.389 [2024-07-12 17:35:26.017170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.389 qpair failed and we were unable to recover it. 
00:27:07.389 [2024-07-12 17:35:26.017330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.017340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.017439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.017449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.017620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.017630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.017774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.017783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.018095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.018124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 
00:27:07.390 [2024-07-12 17:35:26.018312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.018349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.018514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.018524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.018682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.018693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.018845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.018855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.018959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.018969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 
00:27:07.390 [2024-07-12 17:35:26.019160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.019170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.019324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.019334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.019413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.019423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.019563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.019573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.019742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.019752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 
00:27:07.390 [2024-07-12 17:35:26.019836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.019845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.020109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.020119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.020356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.020365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.020637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.020648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.020823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.020833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 
00:27:07.390 [2024-07-12 17:35:26.021050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.021080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.021280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.021309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.021559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.021590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.021753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.021764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.021920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.021930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 
00:27:07.390 [2024-07-12 17:35:26.022141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.022175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.022373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.022414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.022554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.022564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.022723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.022733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.022892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.022902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 
00:27:07.390 [2024-07-12 17:35:26.023060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.023069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.023253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.023263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.023470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.023481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.023580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.023590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.023830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.023839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 
00:27:07.390 [2024-07-12 17:35:26.024069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.024089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.024267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.024276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.024518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.024528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.024689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.024709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 00:27:07.390 [2024-07-12 17:35:26.024837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.390 [2024-07-12 17:35:26.024866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.390 qpair failed and we were unable to recover it. 
00:27:07.390 [2024-07-12 17:35:26.025104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.025134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.025333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.025362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.025596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.025606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.025789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.025799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.025954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.025963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 
00:27:07.391 [2024-07-12 17:35:26.026278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.026308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.026518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.026549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.026701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.026710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.026858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.026867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.027121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.027131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 
00:27:07.391 [2024-07-12 17:35:26.027392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.027403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.027632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.027642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.027829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.027839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.028012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.028022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.028246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.028256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 
00:27:07.391 [2024-07-12 17:35:26.028515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.028545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.028752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.028781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.029084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.029114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.029349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.029385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.029536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.029565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 
00:27:07.391 [2024-07-12 17:35:26.029786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.029796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.029954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.029964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.030218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.030248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.030522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.030552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.030690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.030700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 
00:27:07.391 [2024-07-12 17:35:26.030816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.030828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.030940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.030950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.031174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.031183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.031422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.031432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.031596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.031606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 
00:27:07.391 [2024-07-12 17:35:26.031781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.031810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.032040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.032070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.032341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.032370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.391 [2024-07-12 17:35:26.032587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.391 [2024-07-12 17:35:26.032617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.391 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.032766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.032776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 
00:27:07.392 [2024-07-12 17:35:26.032935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.032946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.033232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.033242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.033484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.033494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.033592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.033602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.033711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.033721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 
00:27:07.392 [2024-07-12 17:35:26.033886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.033895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.034007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.034017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.034174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.034183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.034361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.034370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.034563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.034573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 
00:27:07.392 [2024-07-12 17:35:26.034732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.034742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.034925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.034935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.035086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.035096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.035323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.035333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.035533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.035544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 
00:27:07.392 [2024-07-12 17:35:26.035658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.035668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.035873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.035883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.036216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.036225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.036383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.036394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.036603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.036613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 
00:27:07.392 [2024-07-12 17:35:26.036736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.036746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.036923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.036933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.037109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.037119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.037220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.037229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.037382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.037392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 
00:27:07.392 [2024-07-12 17:35:26.037588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.037598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.037772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.037781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.037942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.037952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.038119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.038129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.038381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.038391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 
00:27:07.392 [2024-07-12 17:35:26.038554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.038565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.038716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.038726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.038839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.038849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.039104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.039113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.039279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.039289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 
00:27:07.392 [2024-07-12 17:35:26.039459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.039469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.039636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.039646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.039744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.039754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.039963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.392 [2024-07-12 17:35:26.039973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.392 qpair failed and we were unable to recover it. 00:27:07.392 [2024-07-12 17:35:26.040125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.040136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 
00:27:07.393 [2024-07-12 17:35:26.040301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.040312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 00:27:07.393 [2024-07-12 17:35:26.040463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.040473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 00:27:07.393 [2024-07-12 17:35:26.040631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.040641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 00:27:07.393 [2024-07-12 17:35:26.040728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.040737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 00:27:07.393 [2024-07-12 17:35:26.040841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.040850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 
00:27:07.393 [2024-07-12 17:35:26.041085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.041112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 00:27:07.393 [2024-07-12 17:35:26.041368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.041406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 00:27:07.393 [2024-07-12 17:35:26.041667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.041677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 00:27:07.393 [2024-07-12 17:35:26.041775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.041784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 00:27:07.393 [2024-07-12 17:35:26.041901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.041911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 
00:27:07.393 [2024-07-12 17:35:26.042117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.042127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 00:27:07.393 [2024-07-12 17:35:26.042393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.042403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 00:27:07.393 [2024-07-12 17:35:26.042621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.042631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 00:27:07.393 [2024-07-12 17:35:26.042746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.042755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 00:27:07.393 [2024-07-12 17:35:26.042909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.393 [2024-07-12 17:35:26.042918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.393 qpair failed and we were unable to recover it. 
00:27:07.393 [2024-07-12 17:35:26.043019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.043028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.043242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.043251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.043436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.043446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.043659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.043669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.043827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.043838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.044072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.044082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.044323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.044333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.044578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.044588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.044767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.044777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.044871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.044880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.045080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.045090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.045350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.045360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.045544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.045555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.045663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.045673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.045770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.045779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.045987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.045999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.046159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.046169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.046403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.046414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.046578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.046588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.046688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.046697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.393 qpair failed and we were unable to recover it.
00:27:07.393 [2024-07-12 17:35:26.046806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.393 [2024-07-12 17:35:26.046816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.046923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.046933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.047024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.047033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.047180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.047190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.047374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.047388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.047512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.047522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.047613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.047623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.047711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.047720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.047801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.047810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.048081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.048109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.048296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.048326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.048595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.048626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.048827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.048837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.049072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.049083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.049310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.049321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.049550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.049562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.049730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.049741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.049888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.049898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.050020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.050031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.050261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.050271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.050442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.050453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.050616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.050626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.050837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.050870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.051054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.051070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.051249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.051263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.051436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.051450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.051575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.051588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.051874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.051888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.052022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.052035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.052218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.052252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.052389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.052421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.052641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.052670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.052868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.052881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.053126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.053139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.053334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.053348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.053531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.053545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.394 [2024-07-12 17:35:26.053713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.394 [2024-07-12 17:35:26.053727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.394 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.053984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.053998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.054268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.054297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.054490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.054522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.054710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.054740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.054939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.054968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.055248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.055277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.055503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.055516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.055771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.055781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.056043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.056053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.056261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.056271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.056435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.056446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.056561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.056571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.056735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.056744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.056920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.056930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.057100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.057110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.057199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.057209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.057298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.057307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.057428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.057438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.057603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.057614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.057766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.057776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.057882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.057891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.058082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.058092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.058331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.058360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.058506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.058536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.058751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.058780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.059082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.059094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.059247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.059257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.059439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.059449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.059608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.059618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.059778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.059788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.059946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.059956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.060140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.060150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.060298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.060327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.060604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.395 [2024-07-12 17:35:26.060635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.395 qpair failed and we were unable to recover it.
00:27:07.395 [2024-07-12 17:35:26.060925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.396 [2024-07-12 17:35:26.060935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.396 qpair failed and we were unable to recover it.
00:27:07.396 [2024-07-12 17:35:26.061184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.396 [2024-07-12 17:35:26.061194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.396 qpair failed and we were unable to recover it.
00:27:07.396 [2024-07-12 17:35:26.061428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.396 [2024-07-12 17:35:26.061438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.396 qpair failed and we were unable to recover it.
00:27:07.396 [2024-07-12 17:35:26.061579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.396 [2024-07-12 17:35:26.061588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.396 qpair failed and we were unable to recover it.
00:27:07.396 [2024-07-12 17:35:26.061759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.061769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.061918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.061929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.062171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.062181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.062267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.062276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.062511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.062522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 
00:27:07.396 [2024-07-12 17:35:26.062680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.062690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.062839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.062850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.063100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.063110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.063276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.063285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.063440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.063450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 
00:27:07.396 [2024-07-12 17:35:26.063623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.063633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.063815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.063825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.064005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.064016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.064264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.064274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.064434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.064446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 
00:27:07.396 [2024-07-12 17:35:26.064682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.064693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.064855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.064864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.065037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.065047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.065147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.065156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.065345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.065355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 
00:27:07.396 [2024-07-12 17:35:26.065525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.065536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.065692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.065702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.065799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.065809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.396 [2024-07-12 17:35:26.066044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.396 [2024-07-12 17:35:26.066054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.396 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.066324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.066334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 
00:27:07.397 [2024-07-12 17:35:26.066498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.066508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.066690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.066700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.066882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.066892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.067110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.067120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.067329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.067339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 
00:27:07.397 [2024-07-12 17:35:26.067561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.067572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.067712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.067722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.067872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.067882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.068125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.068135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.068347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.068357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 
00:27:07.397 [2024-07-12 17:35:26.068557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.068567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.068719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.068729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.068964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.068974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.069195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.069205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.069370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.069386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 
00:27:07.397 [2024-07-12 17:35:26.069574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.069585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.069738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.069748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.069997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.070026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.070302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.070339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.070509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.070519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 
00:27:07.397 [2024-07-12 17:35:26.070630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.070640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.070722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.070732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.070823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.070833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.070985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.070995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.071102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.071112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 
00:27:07.397 [2024-07-12 17:35:26.071349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.071359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.071473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.071483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.071654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.071664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.071759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.071768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.071869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.071881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 
00:27:07.397 [2024-07-12 17:35:26.071966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.071975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.072070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.072079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.072303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.072314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.072564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.072575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.072734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.072744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 
00:27:07.397 [2024-07-12 17:35:26.072897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.397 [2024-07-12 17:35:26.072908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.397 qpair failed and we were unable to recover it. 00:27:07.397 [2024-07-12 17:35:26.073095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.073105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.073197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.073206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.073309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.073319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.073416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.073426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 
00:27:07.398 [2024-07-12 17:35:26.073522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.073531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.073612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.073622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.073915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.073925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.074198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.074209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.074392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.074402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 
00:27:07.398 [2024-07-12 17:35:26.074496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.074506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.074619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.074628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.074713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.074723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.074952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.074964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.075160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.075170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 
00:27:07.398 [2024-07-12 17:35:26.075411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.075422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.075515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.075524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.075633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.075643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.075825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.075835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.075938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.075951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 
00:27:07.398 [2024-07-12 17:35:26.076126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.076136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.076350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.076360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.076522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.076532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.076638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.076648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 00:27:07.398 [2024-07-12 17:35:26.076803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.076813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 
00:27:07.398 [2024-07-12 17:35:26.077046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.398 [2024-07-12 17:35:26.077055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.398 qpair failed and we were unable to recover it. 
[... the same three-message sequence (posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats roughly 110 more times between 17:35:26.077 and 17:35:26.097; only the timestamps differ ...]
00:27:07.401 [2024-07-12 17:35:26.097201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.401 [2024-07-12 17:35:26.097211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.401 qpair failed and we were unable to recover it. 00:27:07.401 [2024-07-12 17:35:26.097425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.401 [2024-07-12 17:35:26.097436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.097528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.097536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.097733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.097743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.097835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.097845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 
00:27:07.402 [2024-07-12 17:35:26.097927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.097936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.098040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.098049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.098226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.098236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.098393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.098404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.098560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.098570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 
00:27:07.402 [2024-07-12 17:35:26.098659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.098668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.098758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.098768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.098845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.098854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.099044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.099054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.099145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.099157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 
00:27:07.402 [2024-07-12 17:35:26.099321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.099332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.099489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.099500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.099587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.099596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.099756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.099766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.099992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.100002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 
00:27:07.402 [2024-07-12 17:35:26.100278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.100288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.100441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.100451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.100566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.100576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.100787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.100797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.100890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.100899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 
00:27:07.402 [2024-07-12 17:35:26.101009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.101019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.101217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.101227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.101388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.101398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.101528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.101538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.101770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.101780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 
00:27:07.402 [2024-07-12 17:35:26.101940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.101950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.102160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.102170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.102376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.102389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.102524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.102534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.102692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.102702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 
00:27:07.402 [2024-07-12 17:35:26.102794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.102803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.102913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.102923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.103068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.103078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.103230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.103240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 00:27:07.402 [2024-07-12 17:35:26.103392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.402 [2024-07-12 17:35:26.103403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.402 qpair failed and we were unable to recover it. 
00:27:07.403 [2024-07-12 17:35:26.103510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.103520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.103668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.103678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.103838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.103847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.104040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.104049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.104296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.104306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 
00:27:07.403 [2024-07-12 17:35:26.104459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.104470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.104629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.104638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.104741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.104751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.104849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.104859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.105022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.105032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 
00:27:07.403 [2024-07-12 17:35:26.105290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.105299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.105458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.105468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.105622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.105632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.105732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.105742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.105847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.105859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 
00:27:07.403 [2024-07-12 17:35:26.105952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.105962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.106090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.106099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.106182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.106191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.106335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.106345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.106635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.106665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 
00:27:07.403 [2024-07-12 17:35:26.106847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.106875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.107086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.107115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.107387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.107417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.107619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.107648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.107848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.107857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 
00:27:07.403 [2024-07-12 17:35:26.107972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.107982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.108081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.108090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.108234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.108244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.108321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.108331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.108442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.108453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 
00:27:07.403 [2024-07-12 17:35:26.108597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.108607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.108712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.108723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.108862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.108872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.109031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.109041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.109226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.109236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 
00:27:07.403 [2024-07-12 17:35:26.109493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.109503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.109612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.109622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.109712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.109721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.109873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.109883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 00:27:07.403 [2024-07-12 17:35:26.109959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.403 [2024-07-12 17:35:26.109968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.403 qpair failed and we were unable to recover it. 
[00:27:07.404 through 00:27:07.702, log timestamps 17:35:26.110169 to 17:35:26.130013: the same triplet — posix.c:1038:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error ... with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats approximately 110 more times, almost all for tqpair=0x7f4a7c000b90; one failure for tqpair=0x24deed0 (17:35:26.124289) and a short run for tqpair=0x7f4a84000b90 (17:35:26.124480 to 17:35:26.125160) appear before the log returns to 0x7f4a7c000b90.]
00:27:07.702 [2024-07-12 17:35:26.130222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.130232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.130458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.130468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.130579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.130589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.130793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.130803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.130898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.130908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 
00:27:07.702 [2024-07-12 17:35:26.131086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.131096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.131304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.131314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.131487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.131498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.131659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.131669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.131892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.131901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 
00:27:07.702 [2024-07-12 17:35:26.132060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.132070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.132292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.132311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.132434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.132449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.132664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.132678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.132839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.132853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 
00:27:07.702 [2024-07-12 17:35:26.133122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.133135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.133231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.133245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.133347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.133361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.133590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.133605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.133704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.133718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 
00:27:07.702 [2024-07-12 17:35:26.133900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.133914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.134149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.134163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.134407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.134421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.702 [2024-07-12 17:35:26.134579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.702 [2024-07-12 17:35:26.134593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.702 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.134754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.134767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 
00:27:07.703 [2024-07-12 17:35:26.134947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.134961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.135154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.135167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.135287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.135301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.135589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.135604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.135843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.135857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 
00:27:07.703 [2024-07-12 17:35:26.136026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.136040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.136280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.136293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.136418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.136432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.136623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.136636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.136809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.136823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 
00:27:07.703 [2024-07-12 17:35:26.136938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.136952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.137135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.137149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.137284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.137297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.137405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.137422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.137619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.137632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 
00:27:07.703 [2024-07-12 17:35:26.137793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.137806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.137926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.137941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.138185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.138199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.138479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.138494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.138617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.138630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 
00:27:07.703 [2024-07-12 17:35:26.138736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.138749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.138965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.138978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.139067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.139080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.139249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.139262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.139433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.139446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 
00:27:07.703 [2024-07-12 17:35:26.139556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.139569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.139739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.139752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.139942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.139955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.140123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.140136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.140306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.140319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 
00:27:07.703 [2024-07-12 17:35:26.140429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.140443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.140547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.140560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.140800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.140813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.141046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.141060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.141322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.141335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 
00:27:07.703 [2024-07-12 17:35:26.141506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.141520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.141675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.141688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.141795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.141808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.141923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.703 [2024-07-12 17:35:26.141936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.703 qpair failed and we were unable to recover it. 00:27:07.703 [2024-07-12 17:35:26.142198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.704 [2024-07-12 17:35:26.142211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.704 qpair failed and we were unable to recover it. 
00:27:07.704 [2024-07-12 17:35:26.142406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.704 [2024-07-12 17:35:26.142422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.704 qpair failed and we were unable to recover it. 00:27:07.704 [2024-07-12 17:35:26.142650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.704 [2024-07-12 17:35:26.142663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.704 qpair failed and we were unable to recover it. 00:27:07.704 [2024-07-12 17:35:26.142858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.704 [2024-07-12 17:35:26.142872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.704 qpair failed and we were unable to recover it. 00:27:07.704 [2024-07-12 17:35:26.143139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.704 [2024-07-12 17:35:26.143153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.704 qpair failed and we were unable to recover it. 00:27:07.704 [2024-07-12 17:35:26.143326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.704 [2024-07-12 17:35:26.143339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.704 qpair failed and we were unable to recover it. 
00:27:07.704 [2024-07-12 17:35:26.143519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.704 [2024-07-12 17:35:26.143533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.704 qpair failed and we were unable to recover it. 00:27:07.704 [2024-07-12 17:35:26.143654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.704 [2024-07-12 17:35:26.143667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.704 qpair failed and we were unable to recover it. 00:27:07.704 [2024-07-12 17:35:26.143882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.704 [2024-07-12 17:35:26.143895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.704 qpair failed and we were unable to recover it. 00:27:07.704 [2024-07-12 17:35:26.144011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.704 [2024-07-12 17:35:26.144024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.704 qpair failed and we were unable to recover it. 00:27:07.704 [2024-07-12 17:35:26.144182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.704 [2024-07-12 17:35:26.144196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.704 qpair failed and we were unable to recover it. 
00:27:07.704 [2024-07-12 17:35:26.144353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.144366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.144534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.144547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.144706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.144720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.144872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.144886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.145133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.145146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.145362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.145376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.145517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.145531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.145725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.145738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.145970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.145984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.146178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.146191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.146342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.146356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.146608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.146622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.146785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.146798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.146965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.146978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.147167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.147179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.147281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.147294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.147521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.147535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.147801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.147817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.147992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.148006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.148263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.148277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.148501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.148515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.148691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.148704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.148920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.704 [2024-07-12 17:35:26.148934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.704 qpair failed and we were unable to recover it.
00:27:07.704 [2024-07-12 17:35:26.149174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.149187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.149344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.149358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.149461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.149474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.149693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.149706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.149887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.149900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.149998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.150013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.150098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.150110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.150383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.150398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.150577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.150593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.150710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.150722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.150897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.150907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.151055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.151065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.151271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.151281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.151467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.151477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.151662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.151672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.151940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.151950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.152183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.152193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.152282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.152291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.152483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.152494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.152724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.152734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.152910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.152920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.153124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.153136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.153283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.153293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.153447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.153457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.153542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.153551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.153704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.153714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.153934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.153944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.154166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.154176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.154349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.154359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.154525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.154535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.154642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.154652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.154794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.154804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.154895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.154905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.155082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.155093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.155308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.155318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.155422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.155431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.155596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.155605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.155812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.155822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.155980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.155989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.156233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.156242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.705 [2024-07-12 17:35:26.156452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.705 [2024-07-12 17:35:26.156462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.705 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.156618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.156628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.156759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.156769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.157031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.157041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.157193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.157203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.157419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.157429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.157540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.157550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.157655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.157664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.157884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.157901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.158165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.158179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.158394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.158408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.158503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.158516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.158689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.158702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.158865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.158879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.159131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.159145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.159329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.159342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.159515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.159529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.159692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.159706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.159876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.159890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.160006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.160019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.160205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.160218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.160523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.160543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.160652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.160667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.160833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.160847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.161032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.161046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.161273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.161286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.161474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.161488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.161602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.161616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.161727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.161740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.161956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.161969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.162189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.162203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.162356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.162369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.162467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.162481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.162587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.162601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.162761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.162774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.162961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.162975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.163149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.163162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.163313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.163326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.163489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.163503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.163655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.163669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.163841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.706 [2024-07-12 17:35:26.163854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.706 qpair failed and we were unable to recover it.
00:27:07.706 [2024-07-12 17:35:26.164016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.707 [2024-07-12 17:35:26.164030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.707 qpair failed and we were unable to recover it.
00:27:07.707 [2024-07-12 17:35:26.164246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.707 [2024-07-12 17:35:26.164260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.707 qpair failed and we were unable to recover it.
00:27:07.707 [2024-07-12 17:35:26.164427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.707 [2024-07-12 17:35:26.164442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.707 qpair failed and we were unable to recover it.
00:27:07.707 [2024-07-12 17:35:26.164554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.707 [2024-07-12 17:35:26.164568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.707 qpair failed and we were unable to recover it.
00:27:07.707 [2024-07-12 17:35:26.164674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.707 [2024-07-12 17:35:26.164686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.707 qpair failed and we were unable to recover it.
00:27:07.707 [2024-07-12 17:35:26.164854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.707 [2024-07-12 17:35:26.164867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.707 qpair failed and we were unable to recover it.
00:27:07.707 [2024-07-12 17:35:26.165160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.707 [2024-07-12 17:35:26.165189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:07.707 qpair failed and we were unable to recover it.
00:27:07.707 [2024-07-12 17:35:26.165557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.707 [2024-07-12 17:35:26.165623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.707 qpair failed and we were unable to recover it.
00:27:07.707 [2024-07-12 17:35:26.165852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.707 [2024-07-12 17:35:26.165866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.707 qpair failed and we were unable to recover it.
00:27:07.707 [2024-07-12 17:35:26.166130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.707 [2024-07-12 17:35:26.166143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.707 qpair failed and we were unable to recover it.
00:27:07.707 [2024-07-12 17:35:26.166389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.707 [2024-07-12 17:35:26.166404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.707 qpair failed and we were unable to recover it.
00:27:07.707 [2024-07-12 17:35:26.166572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.166586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.166801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.166814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.166935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.166948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.167118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.167131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.167364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.167382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 
00:27:07.707 [2024-07-12 17:35:26.167594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.167607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.167731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.167744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.167904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.167918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.168195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.168208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.168375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.168399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 
00:27:07.707 [2024-07-12 17:35:26.168511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.168524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.168738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.168751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.168915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.168928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.169055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.169068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.169254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.169267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 
00:27:07.707 [2024-07-12 17:35:26.169457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.169471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.169710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.169723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.169936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.169949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.170131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.170145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.170360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.170374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 
00:27:07.707 [2024-07-12 17:35:26.170569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.170583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.170758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.170771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.170884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.170898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.171059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.171073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.171233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.171246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 
00:27:07.707 [2024-07-12 17:35:26.171487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.171519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.171811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.171840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.172146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.172159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.172318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.172332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 00:27:07.707 [2024-07-12 17:35:26.172559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.707 [2024-07-12 17:35:26.172573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.707 qpair failed and we were unable to recover it. 
00:27:07.707 [2024-07-12 17:35:26.172789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.172802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.172968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.172981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.173255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.173284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.173490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.173520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.173667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.173697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 
00:27:07.708 [2024-07-12 17:35:26.173919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.173933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.174097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.174116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.174291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.174304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.174488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.174503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.174736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.174750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 
00:27:07.708 [2024-07-12 17:35:26.174912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.174925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.175157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.175170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.175412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.175428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.175538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.175552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.175800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.175814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 
00:27:07.708 [2024-07-12 17:35:26.176029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.176043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.176218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.176231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.176418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.176432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.176608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.176621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.176732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.176745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 
00:27:07.708 [2024-07-12 17:35:26.176918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.176931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.177162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.177175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.177334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.177348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.177546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.177562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.177738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.177752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 
00:27:07.708 [2024-07-12 17:35:26.177933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.177946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.178207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.178221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.178375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.178395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.178574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.178588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.178682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.178695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 
00:27:07.708 [2024-07-12 17:35:26.178856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.178869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.179096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.179110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.179382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.179396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.179518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.179535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 00:27:07.708 [2024-07-12 17:35:26.179768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.708 [2024-07-12 17:35:26.179782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.708 qpair failed and we were unable to recover it. 
00:27:07.708 [2024-07-12 17:35:26.180099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.709 [2024-07-12 17:35:26.180112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.709 qpair failed and we were unable to recover it. 00:27:07.709 [2024-07-12 17:35:26.180351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.709 [2024-07-12 17:35:26.180364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.709 qpair failed and we were unable to recover it. 00:27:07.709 [2024-07-12 17:35:26.180512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.709 [2024-07-12 17:35:26.180525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.709 qpair failed and we were unable to recover it. 00:27:07.709 [2024-07-12 17:35:26.180707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.709 [2024-07-12 17:35:26.180717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.709 qpair failed and we were unable to recover it. 00:27:07.709 [2024-07-12 17:35:26.180895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.709 [2024-07-12 17:35:26.180905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.709 qpair failed and we were unable to recover it. 
00:27:07.709 [2024-07-12 17:35:26.181150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.709 [2024-07-12 17:35:26.181160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.709 qpair failed and we were unable to recover it. 00:27:07.709 [2024-07-12 17:35:26.181320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.709 [2024-07-12 17:35:26.181330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.709 qpair failed and we were unable to recover it. 00:27:07.709 [2024-07-12 17:35:26.181522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.709 [2024-07-12 17:35:26.181533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.709 qpair failed and we were unable to recover it. 00:27:07.709 [2024-07-12 17:35:26.181613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.709 [2024-07-12 17:35:26.181623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.709 qpair failed and we were unable to recover it. 00:27:07.709 [2024-07-12 17:35:26.181722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.709 [2024-07-12 17:35:26.181731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.709 qpair failed and we were unable to recover it. 
00:27:07.709 [2024-07-12 17:35:26.181840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.709 [2024-07-12 17:35:26.181849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:07.709 qpair failed and we were unable to recover it.
[... the three-line error above repeats roughly 114 more times, timestamps advancing from 17:35:26.181 to 17:35:26.202: every connect() to 10.0.0.2:4420 fails with errno = 111, and each qpair (tqpair=0x7f4a7c000b90) fails and cannot be recovered ...]
00:27:07.712 [2024-07-12 17:35:26.202305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.202334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.202528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.202558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.202765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.202795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.202924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.202954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.203138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.203167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 
00:27:07.712 [2024-07-12 17:35:26.203298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.203328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.203543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.203574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.203769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.203798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.204077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.204106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.204304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.204333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 
00:27:07.712 [2024-07-12 17:35:26.204609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.204640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.204795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.204824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.205088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.205118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.205371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.205388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.205633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.205643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 
00:27:07.712 [2024-07-12 17:35:26.205809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.205819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.206045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.206075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.206202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.206231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.206436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.206467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.206690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.206719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 
00:27:07.712 [2024-07-12 17:35:26.206882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.206912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.207115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.207145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.207442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.207472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.207670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.207700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.207895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.207924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 
00:27:07.712 [2024-07-12 17:35:26.208174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.208184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.208360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.208369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.208477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.208487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.208720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.208729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.208838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.208848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 
00:27:07.712 [2024-07-12 17:35:26.208938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.208947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.209118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.209128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.209274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.712 [2024-07-12 17:35:26.209284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.712 qpair failed and we were unable to recover it. 00:27:07.712 [2024-07-12 17:35:26.209457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.209486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.209739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.209768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 
00:27:07.713 [2024-07-12 17:35:26.210097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.210133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.210393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.210424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.210615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.210644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.210824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.210834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.210942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.210951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 
00:27:07.713 [2024-07-12 17:35:26.211134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.211144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.211316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.211327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.211595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.211605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.211704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.211713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.211798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.211807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 
00:27:07.713 [2024-07-12 17:35:26.212073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.212083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.212173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.212182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.212430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.212440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.212583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.212593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.212760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.212770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 
00:27:07.713 [2024-07-12 17:35:26.212869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.212899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.213129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.213158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.213295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.213325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.213577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.213588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.213679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.213688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 
00:27:07.713 [2024-07-12 17:35:26.213869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.213878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.214058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.214088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.214218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.214247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.214438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.214469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.214765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.214794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 
00:27:07.713 [2024-07-12 17:35:26.215137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.215166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.215449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.215479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.215683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.215711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.215852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.215862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.216147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.216176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 
00:27:07.713 [2024-07-12 17:35:26.216428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.216459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.216597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.216627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.216879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.216908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.217134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.217181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.217476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.217507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 
00:27:07.713 [2024-07-12 17:35:26.217659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.217688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.217919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.217929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.218114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.218123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.218330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.713 [2024-07-12 17:35:26.218342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.713 qpair failed and we were unable to recover it. 00:27:07.713 [2024-07-12 17:35:26.218531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.218555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 
00:27:07.714 [2024-07-12 17:35:26.218716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.218745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.218910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.218939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.219189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.219218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.219491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.219522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.219663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.219691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 
00:27:07.714 [2024-07-12 17:35:26.219838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.219867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.220125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.220155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.220365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.220401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.220652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.220682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.220823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.220833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 
00:27:07.714 [2024-07-12 17:35:26.220988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.220998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.221139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.221149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.221371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.221390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.221631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.221661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.221800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.221830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 
00:27:07.714 [2024-07-12 17:35:26.222015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.222044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.222254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.222264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.222429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.222439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.222604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.222614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.222770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.222780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 
00:27:07.714 [2024-07-12 17:35:26.222941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.222951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.223126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.223136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.223290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.223299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.223435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.223446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.223705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.223715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 
00:27:07.714 [2024-07-12 17:35:26.223889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.223899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.224108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.224118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.224366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.224376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.224564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.224575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.224783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.224793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 
00:27:07.714 [2024-07-12 17:35:26.224895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.224904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.225051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.225060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.225206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.225216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.225433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.225444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.225599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.225609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 
00:27:07.714 [2024-07-12 17:35:26.225820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.225850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.226088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.226117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.226365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.226403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.226569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.226603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 00:27:07.714 [2024-07-12 17:35:26.226825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.714 [2024-07-12 17:35:26.226835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.714 qpair failed and we were unable to recover it. 
00:27:07.715 [2024-07-12 17:35:26.226996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.227005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.227161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.227171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.227361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.227407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.227636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.227670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.227923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.227952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 
00:27:07.715 [2024-07-12 17:35:26.228080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.228089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.228321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.228350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.228565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.228595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.228738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.228767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.229010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.229020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 
00:27:07.715 [2024-07-12 17:35:26.229179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.229190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.229329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.229339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.229564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.229576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.229720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.229730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.229880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.229890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 
00:27:07.715 [2024-07-12 17:35:26.230056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.230066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.230272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.230282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.230532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.230542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.230700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.230737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.231030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.231059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 
00:27:07.715 [2024-07-12 17:35:26.231267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.231296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.231574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.231604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.231809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.231839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.232147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.232176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.232445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.232476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 
00:27:07.715 [2024-07-12 17:35:26.232730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.232797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.233030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.233063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.233294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.233324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.233621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.233654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.233859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.233889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 
00:27:07.715 [2024-07-12 17:35:26.234040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.234069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.234350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.234392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.234519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.234548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.234752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.234782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.234985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.234999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 
00:27:07.715 [2024-07-12 17:35:26.235249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.235262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.235528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.235542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.235696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.235709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.235878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.235914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 00:27:07.715 [2024-07-12 17:35:26.236193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.236223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.715 qpair failed and we were unable to recover it. 
00:27:07.715 [2024-07-12 17:35:26.236473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.715 [2024-07-12 17:35:26.236503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.236793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.236822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.237147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.237176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.237464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.237494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.237699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.237729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 
00:27:07.716 [2024-07-12 17:35:26.237973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.237986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.238149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.238162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.238426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.238457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.238656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.238687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.238832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.238845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 
00:27:07.716 [2024-07-12 17:35:26.239066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.239095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.239295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.239324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.239611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.239642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.239938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.239968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.240247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.240260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 
00:27:07.716 [2024-07-12 17:35:26.240455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.240469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.240687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.240701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.240918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.240932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.241044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.241058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.241318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.241333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 
00:27:07.716 [2024-07-12 17:35:26.241435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.241449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.241623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.241655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.241912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.241941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.242195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.242225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 00:27:07.716 [2024-07-12 17:35:26.242478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.716 [2024-07-12 17:35:26.242507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.716 qpair failed and we were unable to recover it. 
00:27:07.716 [2024-07-12 17:35:26.242785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.242819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.243099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.243129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.243419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.243449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.243677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.243707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.243911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.243939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.244134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.244164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.244402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.244416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.244603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.244617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.244791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.244805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.244889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.244901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.245144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.245173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.245451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.245482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.245634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.245664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.245797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.245826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.246018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.246047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.246296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.716 [2024-07-12 17:35:26.246309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.716 qpair failed and we were unable to recover it.
00:27:07.716 [2024-07-12 17:35:26.246468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.246482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.246646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.246675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.246882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.246911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.247205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.247252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.247527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.247557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.247833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.247876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.248095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.248108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.248347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.248361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.248583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.248597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.248770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.248783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.248934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.248947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.249208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.249221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.249391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.249405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.249571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.249585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.249696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.249710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.249857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.249871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.250100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.250113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.250278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.250292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.250528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.250541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.250641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.250656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.250845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.250859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.251021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.251035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.251195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.251208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.251427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.251440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.251611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.251653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.251875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.251904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.252240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.252269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.252480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.252511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.252699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.252728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.252948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.252961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.253178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.253191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.253462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.253476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.253639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.253652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.253818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.253831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.717 [2024-07-12 17:35:26.254126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.717 [2024-07-12 17:35:26.254155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.717 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.254410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.254440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.254634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.254663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.254841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.254855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.255135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.255164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.255443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.255474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.255681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.255710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.255848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.255877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.256072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.256102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.256374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.256412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.256605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.256635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.256840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.256869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.257056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.257085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.257292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.257306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.257468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.257483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.257648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.257662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.257933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.257946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.258200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.258214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.258480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.258494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.258681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.258694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.258877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.258890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.259057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.259070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.259256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.259270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.259504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.259518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.259686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.259699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.259944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.259974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.260183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.260211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.260415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.260445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.260583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.260612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.260884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.260913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.261098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.261114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.261292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.261306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.261563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.261577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.261869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.261898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.262214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.262243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.262448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.262479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.262729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.262758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.263084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.263114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.263390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.263404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.263586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.263600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.263702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.263714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.718 [2024-07-12 17:35:26.263904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.718 [2024-07-12 17:35:26.263917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.718 qpair failed and we were unable to recover it.
00:27:07.719 [2024-07-12 17:35:26.264091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.719 [2024-07-12 17:35:26.264105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.719 qpair failed and we were unable to recover it.
00:27:07.719 [2024-07-12 17:35:26.264302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.719 [2024-07-12 17:35:26.264331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.719 qpair failed and we were unable to recover it.
00:27:07.719 [2024-07-12 17:35:26.264572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.719 [2024-07-12 17:35:26.264602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.719 qpair failed and we were unable to recover it.
00:27:07.719 [2024-07-12 17:35:26.264872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.264901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.265164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.265177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.265285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.265299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.265467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.265481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.265697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.265710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 
00:27:07.719 [2024-07-12 17:35:26.265863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.265876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.266117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.266146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.266398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.266429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.266641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.266671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.266852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.266866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 
00:27:07.719 [2024-07-12 17:35:26.267099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.267128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.267406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.267437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.267587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.267618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.267927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.267957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.268228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.268258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 
00:27:07.719 [2024-07-12 17:35:26.268463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.268493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.268697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.268727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.268989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.269003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.269186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.269199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.269373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.269424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 
00:27:07.719 [2024-07-12 17:35:26.269656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.269685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.269908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.269938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.270191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.270204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.270457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.270472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.270641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.270655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 
00:27:07.719 [2024-07-12 17:35:26.270783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.270799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.270913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.270926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.271089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.271102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.271326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.271355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.271634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.271664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 
00:27:07.719 [2024-07-12 17:35:26.271891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.271931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.272153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.272166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.272398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.272412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.272673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.272686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.272862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.272876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 
00:27:07.719 [2024-07-12 17:35:26.273002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.273016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.273205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.719 [2024-07-12 17:35:26.273218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.719 qpair failed and we were unable to recover it. 00:27:07.719 [2024-07-12 17:35:26.273484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.273498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.273681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.273695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.273881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.273910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 
00:27:07.720 [2024-07-12 17:35:26.274113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.274142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.274416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.274447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.274655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.274684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.274941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.274970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.275152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.275166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 
00:27:07.720 [2024-07-12 17:35:26.275413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.275443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.275672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.275702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.275846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.275875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.276059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.276088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.276288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.276302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 
00:27:07.720 [2024-07-12 17:35:26.276544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.276557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.276784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.276813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.277118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.277148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.277426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.277458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.277711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.277741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 
00:27:07.720 [2024-07-12 17:35:26.278023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.278053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.278296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.278310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.278536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.278550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.278769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.278783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.278957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.278970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 
00:27:07.720 [2024-07-12 17:35:26.279179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.279208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.279342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.279372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.279632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.279662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.279926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.279955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.280150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.280179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 
00:27:07.720 [2024-07-12 17:35:26.280322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.280337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.280569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.280600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.280873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.280902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.281105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.281135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.281438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.281469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 
00:27:07.720 [2024-07-12 17:35:26.281619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.281647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.281920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.281947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.282210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.282224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.282443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.282457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.282675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.282689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 
00:27:07.720 [2024-07-12 17:35:26.282926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.282940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.283166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.283179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.283354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.720 [2024-07-12 17:35:26.283367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.720 qpair failed and we were unable to recover it. 00:27:07.720 [2024-07-12 17:35:26.283551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.721 [2024-07-12 17:35:26.283565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.721 qpair failed and we were unable to recover it. 00:27:07.721 [2024-07-12 17:35:26.283756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.721 [2024-07-12 17:35:26.283770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.721 qpair failed and we were unable to recover it. 
00:27:07.721 [2024-07-12 17:35:26.283938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.721 [2024-07-12 17:35:26.283951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.721 qpair failed and we were unable to recover it. 00:27:07.721 [2024-07-12 17:35:26.284199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.721 [2024-07-12 17:35:26.284228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.721 qpair failed and we were unable to recover it. 00:27:07.721 [2024-07-12 17:35:26.284433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.721 [2024-07-12 17:35:26.284464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.721 qpair failed and we were unable to recover it. 00:27:07.721 [2024-07-12 17:35:26.284612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.721 [2024-07-12 17:35:26.284642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.721 qpair failed and we were unable to recover it. 00:27:07.721 [2024-07-12 17:35:26.284918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.721 [2024-07-12 17:35:26.284947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.721 qpair failed and we were unable to recover it. 
00:27:07.721 [2024-07-12 17:35:26.285200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:07.721 [2024-07-12 17:35:26.285229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:07.721 qpair failed and we were unable to recover it.
[... the three-line pattern above repeats ~115 times between 17:35:26.285 and 17:35:26.310 (elapsed 00:27:07.721 to 00:27:07.724), every attempt refused with errno = 111 against addr=10.0.0.2, port=4420; tqpair=0x7f4a74000b90 through 17:35:26.304, then one attempt each on tqpair=0x24deed0 and tqpair=0x7f4a84000b90, then tqpair=0x7f4a7c000b90 from 17:35:26.305 onward ...]
00:27:07.724 [2024-07-12 17:35:26.310988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.310998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.311156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.311166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.311397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.311407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.311633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.311642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.311785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.311795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 
00:27:07.724 [2024-07-12 17:35:26.311975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.311985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.312219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.312229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.312371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.312384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.312491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.312501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.312605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.312615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 
00:27:07.724 [2024-07-12 17:35:26.312851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.312861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.313022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.313032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.313132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.313151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.313351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.313366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.313527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.313541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 
00:27:07.724 [2024-07-12 17:35:26.313777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.313791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.314012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.314025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.314297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.314310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.314502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.314516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.314699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.314713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 
00:27:07.724 [2024-07-12 17:35:26.314959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.314973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.315190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.315203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.315368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.315386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.315550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.315564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.315780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.315794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 
00:27:07.724 [2024-07-12 17:35:26.315992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.316006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.316228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.316241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.316457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.316471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.316640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.316654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.316838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.316851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 
00:27:07.724 [2024-07-12 17:35:26.316997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.317010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.317117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.317131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.317344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.317357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.317603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.317618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.317869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.317882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 
00:27:07.724 [2024-07-12 17:35:26.318028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.318041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.724 qpair failed and we were unable to recover it. 00:27:07.724 [2024-07-12 17:35:26.318210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.724 [2024-07-12 17:35:26.318223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.318468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.318482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.318629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.318642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.318822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.318838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 
00:27:07.725 [2024-07-12 17:35:26.319113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.319142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.319361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.319398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.319666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.319679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.319872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.319885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.320070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.320084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 
00:27:07.725 [2024-07-12 17:35:26.320322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.320335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.320503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.320517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.320706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.320735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.320946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.320976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.321176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.321206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 
00:27:07.725 [2024-07-12 17:35:26.321439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.321452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.321554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.321570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.321731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.321744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.321986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.322000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.322116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.322129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 
00:27:07.725 [2024-07-12 17:35:26.322239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.322255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.322434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.322449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.322568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.322582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.322772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.322786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.322935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.322949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 
00:27:07.725 [2024-07-12 17:35:26.323108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.323121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.323351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.323388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.323612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.323642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.323871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.323901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.324083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.324097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 
00:27:07.725 [2024-07-12 17:35:26.324331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.324344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.324559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.324576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.324745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.324759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.324926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.324940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.325100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.325113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 
00:27:07.725 [2024-07-12 17:35:26.325349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.325362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.325615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.725 [2024-07-12 17:35:26.325630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.725 qpair failed and we were unable to recover it. 00:27:07.725 [2024-07-12 17:35:26.325790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.726 [2024-07-12 17:35:26.325804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.726 qpair failed and we were unable to recover it. 00:27:07.726 [2024-07-12 17:35:26.325991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.726 [2024-07-12 17:35:26.326005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.726 qpair failed and we were unable to recover it. 00:27:07.726 [2024-07-12 17:35:26.326092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.726 [2024-07-12 17:35:26.326104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.726 qpair failed and we were unable to recover it. 
00:27:07.726 [2024-07-12 17:35:26.326351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.726 [2024-07-12 17:35:26.326389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.726 qpair failed and we were unable to recover it. 00:27:07.726 [2024-07-12 17:35:26.326620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.726 [2024-07-12 17:35:26.326649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.726 qpair failed and we were unable to recover it. 00:27:07.726 [2024-07-12 17:35:26.326897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.726 [2024-07-12 17:35:26.326926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.726 qpair failed and we were unable to recover it. 00:27:07.726 [2024-07-12 17:35:26.327186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.726 [2024-07-12 17:35:26.327200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.726 qpair failed and we were unable to recover it. 00:27:07.726 [2024-07-12 17:35:26.327367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.726 [2024-07-12 17:35:26.327384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.726 qpair failed and we were unable to recover it. 
00:27:07.727 [2024-07-12 17:35:26.339469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.727 [2024-07-12 17:35:26.339500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.727 qpair failed and we were unable to recover it. 00:27:07.727 [2024-07-12 17:35:26.339776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.727 [2024-07-12 17:35:26.339805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.727 qpair failed and we were unable to recover it. 00:27:07.727 [2024-07-12 17:35:26.340111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.727 [2024-07-12 17:35:26.340141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.727 qpair failed and we were unable to recover it. 00:27:07.727 [2024-07-12 17:35:26.340417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.727 [2024-07-12 17:35:26.340447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.727 qpair failed and we were unable to recover it. 00:27:07.727 [2024-07-12 17:35:26.340715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.727 [2024-07-12 17:35:26.340782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.727 qpair failed and we were unable to recover it. 
00:27:07.728 [2024-07-12 17:35:26.354703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.728 [2024-07-12 17:35:26.354717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.728 qpair failed and we were unable to recover it. 00:27:07.728 [2024-07-12 17:35:26.354959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.728 [2024-07-12 17:35:26.354988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.728 qpair failed and we were unable to recover it. 00:27:07.728 [2024-07-12 17:35:26.355236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.728 [2024-07-12 17:35:26.355265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.728 qpair failed and we were unable to recover it. 00:27:07.728 [2024-07-12 17:35:26.355532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.728 [2024-07-12 17:35:26.355562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.728 qpair failed and we were unable to recover it. 00:27:07.728 [2024-07-12 17:35:26.355828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.728 [2024-07-12 17:35:26.355858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.728 qpair failed and we were unable to recover it. 
00:27:07.728 [2024-07-12 17:35:26.356124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.356137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.356386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.356400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.356644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.356657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.356842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.356856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.357099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.357113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 
00:27:07.729 [2024-07-12 17:35:26.357299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.357328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.357627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.357658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.357802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.357832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.358015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.358044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.358256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.358285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 
00:27:07.729 [2024-07-12 17:35:26.358545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.358559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.358749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.358763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.359002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.359016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.359265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.359295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.359499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.359529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 
00:27:07.729 [2024-07-12 17:35:26.359827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.359857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.360107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.360137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.360340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.360369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.360572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.360602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.360875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.360911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 
00:27:07.729 [2024-07-12 17:35:26.361185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.361214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.361424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.361438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.361676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.361690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.361905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.361918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.362098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.362112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 
00:27:07.729 [2024-07-12 17:35:26.362353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.362393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.362668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.362697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.362900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.362929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.363106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.363120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.363301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.363330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 
00:27:07.729 [2024-07-12 17:35:26.363562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.363593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.363796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.363825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.364081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.364111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.364366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.364407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.364620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.364649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 
00:27:07.729 [2024-07-12 17:35:26.364924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.364954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.365250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.729 [2024-07-12 17:35:26.365285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.729 qpair failed and we were unable to recover it. 00:27:07.729 [2024-07-12 17:35:26.365526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.365540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.365705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.365719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.365960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.365974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 
00:27:07.730 [2024-07-12 17:35:26.366206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.366219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.366387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.366402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.366647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.366676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.366933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.366963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.367151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.367181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 
00:27:07.730 [2024-07-12 17:35:26.367322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.367350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.367574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.367604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.367911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.367941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.368156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.368185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.368431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.368462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 
00:27:07.730 [2024-07-12 17:35:26.368730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.368760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.369059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.369088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.369283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.369313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.369578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.369609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.369742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.369771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 
00:27:07.730 [2024-07-12 17:35:26.370042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.370072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.370254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.370267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.370387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.370400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.370614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.370628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.370865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.370880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 
00:27:07.730 [2024-07-12 17:35:26.371137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.371151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.371387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.371401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.371641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.371654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.371907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.371920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.372156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.372169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 
00:27:07.730 [2024-07-12 17:35:26.372362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.372375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.372597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.372611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.372781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.372811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.373034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.373063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.373262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.373290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 
00:27:07.730 [2024-07-12 17:35:26.373487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.373501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.373747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.373777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.374047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.374076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.374369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.374387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.374635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.374649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 
00:27:07.730 [2024-07-12 17:35:26.374880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.730 [2024-07-12 17:35:26.374894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.730 qpair failed and we were unable to recover it. 00:27:07.730 [2024-07-12 17:35:26.375109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.375122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.375216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.375229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.375442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.375456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.375695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.375708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 
00:27:07.731 [2024-07-12 17:35:26.375816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.375829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.376009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.376023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.376180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.376192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.376426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.376457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.376710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.376739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 
00:27:07.731 [2024-07-12 17:35:26.377016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.377045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.377308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.377338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.377607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.377638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.377913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.377942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.378158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.378187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 
00:27:07.731 [2024-07-12 17:35:26.378383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.378397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.378630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.378644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.378823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.378836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.379078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.379092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.379317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.379346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 
00:27:07.731 [2024-07-12 17:35:26.379543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.379573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.379853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.379882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.380120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.380149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.380425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.380463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.380744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.380760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 
00:27:07.731 [2024-07-12 17:35:26.380791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ed000 (9): Bad file descriptor 00:27:07.731 [2024-07-12 17:35:26.381203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.381271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.381573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.381609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.381889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.381919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.382069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.382099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.382394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.382408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 
00:27:07.731 [2024-07-12 17:35:26.382578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.382592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.382691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.382707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.382869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.382883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.383121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.383134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.383375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.383392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 
00:27:07.731 [2024-07-12 17:35:26.383545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.383558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.383668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.383681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.384017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.384097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.384355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.384404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.384635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.384666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 
00:27:07.731 [2024-07-12 17:35:26.384944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.384975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.731 [2024-07-12 17:35:26.385235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.731 [2024-07-12 17:35:26.385265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.731 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.385449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.385464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.385680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.385693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.385862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.385875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 
00:27:07.732 [2024-07-12 17:35:26.386037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.386067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.386261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.386290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.386508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.386521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.386764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.386793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.387063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.387092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 
00:27:07.732 [2024-07-12 17:35:26.387393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.387424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.387626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.387656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.387939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.387968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.388265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.388293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.388506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.388536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 
00:27:07.732 [2024-07-12 17:35:26.388724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.388754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.389003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.389032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.389307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.389320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.389562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.389576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.389791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.389805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 
00:27:07.732 [2024-07-12 17:35:26.389966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.390002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.390200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.390229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.390500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.390540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.390780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.390793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.390971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.390987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 
00:27:07.732 [2024-07-12 17:35:26.391223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.391252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.391509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.391540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.391741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.391770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.391965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.391993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.392259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.392288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 
00:27:07.732 [2024-07-12 17:35:26.392506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.392530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.392776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.392789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.393006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.393019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.393281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.393294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.393534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.393548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 
00:27:07.732 [2024-07-12 17:35:26.393775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.393789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.393956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.393969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.394168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.394198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.394476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.394507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.394809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.394839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 
00:27:07.732 [2024-07-12 17:35:26.395029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.395059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.395310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.395339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.732 [2024-07-12 17:35:26.395626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.732 [2024-07-12 17:35:26.395640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.732 qpair failed and we were unable to recover it. 00:27:07.733 [2024-07-12 17:35:26.395868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.733 [2024-07-12 17:35:26.395881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.733 qpair failed and we were unable to recover it. 00:27:07.733 [2024-07-12 17:35:26.396030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.733 [2024-07-12 17:35:26.396044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.733 qpair failed and we were unable to recover it. 
00:27:07.733 [2024-07-12 17:35:26.396201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.733 [2024-07-12 17:35:26.396214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.733 qpair failed and we were unable to recover it. 00:27:07.733 [2024-07-12 17:35:26.396439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.733 [2024-07-12 17:35:26.396453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.733 qpair failed and we were unable to recover it. 00:27:07.733 [2024-07-12 17:35:26.396675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.733 [2024-07-12 17:35:26.396689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.733 qpair failed and we were unable to recover it. 00:27:07.733 [2024-07-12 17:35:26.396954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.733 [2024-07-12 17:35:26.396968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.733 qpair failed and we were unable to recover it. 00:27:07.733 [2024-07-12 17:35:26.397234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.733 [2024-07-12 17:35:26.397247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.733 qpair failed and we were unable to recover it. 
00:27:07.733 [2024-07-12 17:35:26.397408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.733 [2024-07-12 17:35:26.397422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.733 qpair failed and we were unable to recover it. 00:27:07.733 [2024-07-12 17:35:26.397666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.733 [2024-07-12 17:35:26.397706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.733 qpair failed and we were unable to recover it. 00:27:07.733 [2024-07-12 17:35:26.397973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.733 [2024-07-12 17:35:26.398002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.733 qpair failed and we were unable to recover it. 00:27:07.733 [2024-07-12 17:35:26.398269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.733 [2024-07-12 17:35:26.398298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.733 qpair failed and we were unable to recover it. 00:27:07.733 [2024-07-12 17:35:26.398596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.733 [2024-07-12 17:35:26.398626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.733 qpair failed and we were unable to recover it. 
00:27:07.733-00:27:07.736 [... identical connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / qpair-recovery-failed records for tqpair=0x24deed0 (addr=10.0.0.2, port=4420) repeat, timestamps 2024-07-12 17:35:26.398868 through 17:35:26.426260 ...]
00:27:07.736 [2024-07-12 17:35:26.426537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.426550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.426767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.426780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.426929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.426943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.427175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.427188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.427410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.427424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 
00:27:07.736 [2024-07-12 17:35:26.427542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.427556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.427782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.427795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.427963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.427977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.428227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.428261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.428396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.428426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 
00:27:07.736 [2024-07-12 17:35:26.428678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.428707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.428983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.429013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.429238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.429267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.429401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.429442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.429686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.429699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 
00:27:07.736 [2024-07-12 17:35:26.429939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.429952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.430134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.430148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.430320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.430334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.430573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.430588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.430757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.430770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 
00:27:07.736 [2024-07-12 17:35:26.430923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.430936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.431119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.431133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.431393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.431408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.431591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.431605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.431786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.431815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 
00:27:07.736 [2024-07-12 17:35:26.432065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.432094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.432373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.432413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.432641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.432670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.432802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.432831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.433080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.433109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 
00:27:07.736 [2024-07-12 17:35:26.433317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.433347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.433537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.433567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.433757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.433786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.434070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.434099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.434390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.434422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 
00:27:07.736 [2024-07-12 17:35:26.434614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.736 [2024-07-12 17:35:26.434643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.736 qpair failed and we were unable to recover it. 00:27:07.736 [2024-07-12 17:35:26.434845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.434874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.435150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.435178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.435464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.435478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.435716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.435729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 
00:27:07.737 [2024-07-12 17:35:26.435988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.436001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.436245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.436258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.436486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.436500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.436739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.436753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.436905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.436919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 
00:27:07.737 [2024-07-12 17:35:26.437087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.437100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.437363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.437398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.437583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.437612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.437860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.437889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.438209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.438262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 
00:27:07.737 [2024-07-12 17:35:26.438498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.438510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.438678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.438689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.438878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.438908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.439192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.439222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.439467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.439498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 
00:27:07.737 [2024-07-12 17:35:26.439779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.439809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.440026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.440056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.440357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.440395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.440583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.440594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.440835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.440865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 
00:27:07.737 [2024-07-12 17:35:26.441139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.441168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.441452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.441462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.441644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.441659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.441765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.441774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.442007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.442016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 
00:27:07.737 [2024-07-12 17:35:26.442278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.442288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.442473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.442483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.442569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.442579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.442758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.442767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.442985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.443014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 
00:27:07.737 [2024-07-12 17:35:26.443276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.443305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.443558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.443588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.443841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.443871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.444070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.444099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.444374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.444411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 
00:27:07.737 [2024-07-12 17:35:26.444658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.444668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.444783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.737 [2024-07-12 17:35:26.444793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.737 qpair failed and we were unable to recover it. 00:27:07.737 [2024-07-12 17:35:26.445045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.445055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.445218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.445228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.445382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.445402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 
00:27:07.738 [2024-07-12 17:35:26.445561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.445570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.445800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.445810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.446012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.446022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.446249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.446278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.446467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.446498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 
00:27:07.738 [2024-07-12 17:35:26.446724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.446753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.446946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.446975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.447173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.447203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.447451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.447461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.447701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.447711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 
00:27:07.738 [2024-07-12 17:35:26.447800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.447809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.447990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.448000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.448168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.448178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.448434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.448464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.448748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.448778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 
00:27:07.738 [2024-07-12 17:35:26.449029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.449059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.449260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.449289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.449570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.449601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.449818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.449847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.450096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.450126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 
00:27:07.738 [2024-07-12 17:35:26.450403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.450434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.450720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.450750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.450998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.451033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.451219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.451249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.451572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.451583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 
00:27:07.738 [2024-07-12 17:35:26.451855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.451885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.452122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.452151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.452343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.452373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.452696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.452706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.452983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.452993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 
00:27:07.738 [2024-07-12 17:35:26.453090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.453099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.453311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.453320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.738 qpair failed and we were unable to recover it. 00:27:07.738 [2024-07-12 17:35:26.453555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.738 [2024-07-12 17:35:26.453586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.739 qpair failed and we were unable to recover it. 00:27:07.739 [2024-07-12 17:35:26.453746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.739 [2024-07-12 17:35:26.453776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.739 qpair failed and we were unable to recover it. 00:27:07.739 [2024-07-12 17:35:26.453987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.739 [2024-07-12 17:35:26.454015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.739 qpair failed and we were unable to recover it. 
00:27:07.739 [2024-07-12 17:35:26.454297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.739 [2024-07-12 17:35:26.454326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.739 qpair failed and we were unable to recover it. 00:27:07.739 [2024-07-12 17:35:26.454567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.739 [2024-07-12 17:35:26.454598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.739 qpair failed and we were unable to recover it. 00:27:07.739 [2024-07-12 17:35:26.454799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.739 [2024-07-12 17:35:26.454829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.739 qpair failed and we were unable to recover it. 00:27:07.739 [2024-07-12 17:35:26.454976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.739 [2024-07-12 17:35:26.455005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.739 qpair failed and we were unable to recover it. 00:27:07.739 [2024-07-12 17:35:26.455310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.739 [2024-07-12 17:35:26.455338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.739 qpair failed and we were unable to recover it. 
00:27:07.739 [2024-07-12 17:35:26.455640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.739 [2024-07-12 17:35:26.455671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.739 qpair failed and we were unable to recover it. 00:27:07.739 [2024-07-12 17:35:26.455905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.739 [2024-07-12 17:35:26.455934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.739 qpair failed and we were unable to recover it. 00:27:07.739 [2024-07-12 17:35:26.456087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.739 [2024-07-12 17:35:26.456116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.739 qpair failed and we were unable to recover it. 00:27:07.739 [2024-07-12 17:35:26.456412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.739 [2024-07-12 17:35:26.456443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:07.739 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.456646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.456676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 
00:27:08.012 [2024-07-12 17:35:26.456953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.456985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.457233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.457263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.457468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.457498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.457734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.457763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.458021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.458088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 
00:27:08.012 [2024-07-12 17:35:26.458369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.458393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.458614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.458644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.458845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.458875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.459147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.459177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.459402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.459433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 
00:27:08.012 [2024-07-12 17:35:26.459572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.459602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.459789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.459817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.460026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.460056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.460331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.460360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.460504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.460518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 
00:27:08.012 [2024-07-12 17:35:26.460748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.460761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.460999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.461029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.461263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.461292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.461528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.461559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.461818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.461847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 
00:27:08.012 [2024-07-12 17:35:26.462122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.462152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.462340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.462369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.462581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.462611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.462756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.462785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.462992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.463021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 
00:27:08.012 [2024-07-12 17:35:26.463226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.463255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.463533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.463564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.463815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.463844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.464049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.464063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.464301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.464314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 
00:27:08.012 [2024-07-12 17:35:26.464486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.464500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.464726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.464756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.465048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.465077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.465298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.465327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.465610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.465641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 
00:27:08.012 [2024-07-12 17:35:26.465826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.465854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.012 qpair failed and we were unable to recover it. 00:27:08.012 [2024-07-12 17:35:26.466050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.012 [2024-07-12 17:35:26.466079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 00:27:08.013 [2024-07-12 17:35:26.466304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.013 [2024-07-12 17:35:26.466334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 00:27:08.013 [2024-07-12 17:35:26.466598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.013 [2024-07-12 17:35:26.466628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 00:27:08.013 [2024-07-12 17:35:26.466930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.013 [2024-07-12 17:35:26.466959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 
00:27:08.013 [2024-07-12 17:35:26.467237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.013 [2024-07-12 17:35:26.467265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 00:27:08.013 [2024-07-12 17:35:26.467552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.013 [2024-07-12 17:35:26.467583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 00:27:08.013 [2024-07-12 17:35:26.467863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.013 [2024-07-12 17:35:26.467877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 00:27:08.013 [2024-07-12 17:35:26.468027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.013 [2024-07-12 17:35:26.468040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 00:27:08.013 [2024-07-12 17:35:26.468229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.013 [2024-07-12 17:35:26.468244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 
00:27:08.013 [2024-07-12 17:35:26.468508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.013 [2024-07-12 17:35:26.468539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 00:27:08.013 [2024-07-12 17:35:26.468811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.013 [2024-07-12 17:35:26.468840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 00:27:08.013 [2024-07-12 17:35:26.469033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.013 [2024-07-12 17:35:26.469063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 00:27:08.013 [2024-07-12 17:35:26.469286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.013 [2024-07-12 17:35:26.469316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 00:27:08.013 [2024-07-12 17:35:26.469607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.013 [2024-07-12 17:35:26.469621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.013 qpair failed and we were unable to recover it. 
00:27:08.016 [2024-07-12 17:35:26.496699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.496712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.497012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.497041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.497346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.497375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.497589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.497618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.497835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.497870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 
00:27:08.016 [2024-07-12 17:35:26.498067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.498096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.498279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.498308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.498590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.498604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.498836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.498864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.499090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.499119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 
00:27:08.016 [2024-07-12 17:35:26.499319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.499348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.499547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.499578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.499849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.499863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.500100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.500113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.500356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.500369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 
00:27:08.016 [2024-07-12 17:35:26.500615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.500629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.500782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.500795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.501039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.501068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.501365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.501405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.501684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.501714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 
00:27:08.016 [2024-07-12 17:35:26.501998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.502027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.502308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.502338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.502566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.502597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.502855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.502868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.503036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.503049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 
00:27:08.016 [2024-07-12 17:35:26.503315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.016 [2024-07-12 17:35:26.503344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.016 qpair failed and we were unable to recover it. 00:27:08.016 [2024-07-12 17:35:26.503600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.503631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.503936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.503950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.504165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.504179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.504402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.504416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 
00:27:08.017 [2024-07-12 17:35:26.504706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.504735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.505025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.505055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.505336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.505365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.505626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.505655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.505971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.505984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 
00:27:08.017 [2024-07-12 17:35:26.506161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.506175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.506393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.506407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.506577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.506590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.506835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.506849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.506999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.507012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 
00:27:08.017 [2024-07-12 17:35:26.507234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.507263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.507541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.507572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.507870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.507899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.508179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.508208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.508342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.508376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 
00:27:08.017 [2024-07-12 17:35:26.508630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.508644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.508881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.508894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.509110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.509123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.509277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.509290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.509539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.509570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 
00:27:08.017 [2024-07-12 17:35:26.509781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.509811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.510012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.510042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.510318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.510347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.510606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.510635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.510904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.510917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 
00:27:08.017 [2024-07-12 17:35:26.511171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.511184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.511302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.511316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.511474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.511488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.511773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.511787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.512051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.512065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 
00:27:08.017 [2024-07-12 17:35:26.512281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.512303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.512470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.512484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.512734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.512763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.512965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.512994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.513245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.513274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 
00:27:08.017 [2024-07-12 17:35:26.513571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.017 [2024-07-12 17:35:26.513602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.017 qpair failed and we were unable to recover it. 00:27:08.017 [2024-07-12 17:35:26.513733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.513763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 00:27:08.018 [2024-07-12 17:35:26.514034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.514048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 00:27:08.018 [2024-07-12 17:35:26.514263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.514276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 00:27:08.018 [2024-07-12 17:35:26.514540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.514553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 
00:27:08.018 [2024-07-12 17:35:26.514712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.514726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 00:27:08.018 [2024-07-12 17:35:26.514833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.514847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 00:27:08.018 [2024-07-12 17:35:26.515088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.515102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 00:27:08.018 [2024-07-12 17:35:26.515271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.515284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 00:27:08.018 [2024-07-12 17:35:26.515488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.515502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 
00:27:08.018 [2024-07-12 17:35:26.515678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.515712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 00:27:08.018 [2024-07-12 17:35:26.515924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.515954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 00:27:08.018 [2024-07-12 17:35:26.516085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.516114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 00:27:08.018 [2024-07-12 17:35:26.516366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.516405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 00:27:08.018 [2024-07-12 17:35:26.516656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.018 [2024-07-12 17:35:26.516686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.018 qpair failed and we were unable to recover it. 
00:27:08.020 [2024-07-12 17:35:26.533200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.020 [2024-07-12 17:35:26.533216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.020 qpair failed and we were unable to recover it.
00:27:08.020 [2024-07-12 17:35:26.533337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.020 [2024-07-12 17:35:26.533372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.020 qpair failed and we were unable to recover it.
00:27:08.020 [2024-07-12 17:35:26.533544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.020 [2024-07-12 17:35:26.533578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.020 qpair failed and we were unable to recover it.
00:27:08.020 [2024-07-12 17:35:26.533755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.020 [2024-07-12 17:35:26.533771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.020 qpair failed and we were unable to recover it.
00:27:08.020 [2024-07-12 17:35:26.533952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.020 [2024-07-12 17:35:26.533989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.020 qpair failed and we were unable to recover it.
00:27:08.021 [2024-07-12 17:35:26.541980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.541994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.542187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.542205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.542465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.542490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.542608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.542619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.542851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.542861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 
00:27:08.021 [2024-07-12 17:35:26.543120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.543131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.543242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.543252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.543423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.543433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.543592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.543602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.543811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.543820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 
00:27:08.021 [2024-07-12 17:35:26.543971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.543981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.544153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.544163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.544325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.544335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.544591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.544602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.544687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.544700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 
00:27:08.021 [2024-07-12 17:35:26.544851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.544861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.545083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.545093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.545243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.545253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.545414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.545425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.545589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.545599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 
00:27:08.021 [2024-07-12 17:35:26.545696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.545705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.545815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.545824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.545967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.545977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.546231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.546241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.546425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.546436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 
00:27:08.021 [2024-07-12 17:35:26.546600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.546610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.546844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.546855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.547107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.547118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.547272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.547282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.547433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.547443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 
00:27:08.021 [2024-07-12 17:35:26.547658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.547668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.547927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.021 [2024-07-12 17:35:26.547937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.021 qpair failed and we were unable to recover it. 00:27:08.021 [2024-07-12 17:35:26.548191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.548203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.548397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.548407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.548615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.548624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 
00:27:08.022 [2024-07-12 17:35:26.548776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.548786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.548937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.548947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.549154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.549163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.549303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.549313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.549459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.549474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 
00:27:08.022 [2024-07-12 17:35:26.549694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.549704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.549900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.549911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.550095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.550105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.550267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.550277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.550497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.550509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 
00:27:08.022 [2024-07-12 17:35:26.550675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.550685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.550918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.550928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.551097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.551107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.551317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.551327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.551585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.551595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 
00:27:08.022 [2024-07-12 17:35:26.551814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.551843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.552059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.552088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.552267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.552296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.552505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.552537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.552805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.552839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 
00:27:08.022 [2024-07-12 17:35:26.553030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.553039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.553271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.553281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.553502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.553512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.553716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.553726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.553932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.553941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 
00:27:08.022 [2024-07-12 17:35:26.554132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.554141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.554282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.554291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.554577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.554607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.554799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.554829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.555117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.555127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 
00:27:08.022 [2024-07-12 17:35:26.555361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.555370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.555538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.555549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.555711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.555721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.555987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.556016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.556215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.556244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 
00:27:08.022 [2024-07-12 17:35:26.556518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.556548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.556824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.556834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.557120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.022 [2024-07-12 17:35:26.557130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.022 qpair failed and we were unable to recover it. 00:27:08.022 [2024-07-12 17:35:26.557289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.557299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.557477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.557487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 
00:27:08.023 [2024-07-12 17:35:26.557630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.557639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.557799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.557809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.558046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.558056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.558153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.558162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.558338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.558348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 
00:27:08.023 [2024-07-12 17:35:26.558437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.558447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.558610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.558620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.558874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.558884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.559044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.559054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.559260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.559271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 
00:27:08.023 [2024-07-12 17:35:26.559347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.559356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.559516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.559527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.559690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.559700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.559853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.559863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.559951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.559960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 
00:27:08.023 [2024-07-12 17:35:26.560146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.560156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.560379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.560420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.560669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.560699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.560900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.560929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.561105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.561117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 
00:27:08.023 [2024-07-12 17:35:26.561294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.561323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.561623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.561654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.561793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.561822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.562110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.562121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.562337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.562347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 
00:27:08.023 [2024-07-12 17:35:26.562556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.562567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.562733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.562743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.563014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.563024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.563195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.563205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.563384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.563394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 
00:27:08.023 [2024-07-12 17:35:26.563568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.563578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.563812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.563821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.563982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.563992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.564150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.564159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.564260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.564269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 
00:27:08.023 [2024-07-12 17:35:26.564428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.564438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.564689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.564699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.564920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.564930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.023 [2024-07-12 17:35:26.565103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.023 [2024-07-12 17:35:26.565113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.023 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.565216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.565228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 
00:27:08.024 [2024-07-12 17:35:26.565327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.565336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.565564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.565576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.565755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.565765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.565923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.565933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.566203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.566213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 
00:27:08.024 [2024-07-12 17:35:26.566446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.566456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.566616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.566626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.566785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.566795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.567039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.567049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.567191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.567201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 
00:27:08.024 [2024-07-12 17:35:26.567364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.567374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.567520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.567530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.567727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.567738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.567892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.567902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.568076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.568086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 
00:27:08.024 [2024-07-12 17:35:26.568323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.568333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.568571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.568581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.568753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.568762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.568865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.568875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.569151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.569161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 
00:27:08.024 [2024-07-12 17:35:26.569269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.569278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.569440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.569451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.569614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.569624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.569832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.569842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.570014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.570024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 
00:27:08.024 [2024-07-12 17:35:26.570256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.570266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.570493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.570504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.570621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.570631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.570880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.570890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.571118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.571128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 
00:27:08.024 [2024-07-12 17:35:26.571286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.571296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.571523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.571533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.571684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.571694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.571791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.571800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.571959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.571969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 
00:27:08.024 [2024-07-12 17:35:26.572164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.572175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.572330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.572340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.572553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.572563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.572733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.024 [2024-07-12 17:35:26.572742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.024 qpair failed and we were unable to recover it. 00:27:08.024 [2024-07-12 17:35:26.572835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.572845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 
00:27:08.025 [2024-07-12 17:35:26.572954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.572963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.573212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.573222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.573432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.573442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.573595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.573605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.573782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.573792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 
00:27:08.025 [2024-07-12 17:35:26.573950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.573960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.574136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.574147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.574382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.574393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.574554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.574564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.574742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.574752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 
00:27:08.025 [2024-07-12 17:35:26.574944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.574954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.575121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.575131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.575364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.575374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.575591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.575600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.575775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.575785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 
00:27:08.025 [2024-07-12 17:35:26.575917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.575927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.576091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.576101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.576254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.576264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.576433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.576443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 00:27:08.025 [2024-07-12 17:35:26.576584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.025 [2024-07-12 17:35:26.576593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.025 qpair failed and we were unable to recover it. 
00:27:08.028 [2024-07-12 17:35:26.599262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.599272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.599451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.599462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.599633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.599642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.599861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.599870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.600127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.600157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 
00:27:08.028 [2024-07-12 17:35:26.600410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.600442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.600735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.600746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.600910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.600920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.601119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.601129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.601279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.601289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 
00:27:08.028 [2024-07-12 17:35:26.601527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.601537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.601601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.601611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.601768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.601778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.601940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.601950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.602038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.602047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 
00:27:08.028 [2024-07-12 17:35:26.602157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.602166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.602317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.602327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.602411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.602421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.602560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.602570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.602713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.602723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 
00:27:08.028 [2024-07-12 17:35:26.602865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.602875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.602963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.602972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.603066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.603075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.603234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.603243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 00:27:08.028 [2024-07-12 17:35:26.603331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.028 [2024-07-12 17:35:26.603340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.028 qpair failed and we were unable to recover it. 
00:27:08.029 [2024-07-12 17:35:26.603480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.603490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.603653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.603663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.603742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.603751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.603839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.603848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.603988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.603997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 
00:27:08.029 [2024-07-12 17:35:26.604149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.604159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.604346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.604356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.604459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.604469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.604557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.604566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.604732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.604742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 
00:27:08.029 [2024-07-12 17:35:26.604835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.604845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.605062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.605097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.605280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.605310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.605446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.605477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.605775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.605805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 
00:27:08.029 [2024-07-12 17:35:26.605942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.605972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.606163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.606193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.606425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.606456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.606595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.606624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.606882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.606912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 
00:27:08.029 [2024-07-12 17:35:26.607131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.607161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.607438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.607468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.607709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.607739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.607934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.607964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.608183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.608192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 
00:27:08.029 [2024-07-12 17:35:26.608285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.608294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.608387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.608396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.608550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.608560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.608716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.608726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.608820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.608829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 
00:27:08.029 [2024-07-12 17:35:26.608989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.608999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.609150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.609160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.609299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.609309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.609465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.609475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 00:27:08.029 [2024-07-12 17:35:26.609580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.029 [2024-07-12 17:35:26.609590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.029 qpair failed and we were unable to recover it. 
00:27:08.029 [2024-07-12 17:35:26.609728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.609738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.609880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.609889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.609996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.610005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.610161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.610170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.610254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.610263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 
00:27:08.030 [2024-07-12 17:35:26.610367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.610376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.610465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.610475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.610616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.610625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.610778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.610788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.610896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.610908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 
00:27:08.030 [2024-07-12 17:35:26.611049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.611058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.611210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.611220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.611304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.611313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.611522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.611532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.611630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.611640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 
00:27:08.030 [2024-07-12 17:35:26.611846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.611855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.612015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.612027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.612136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.612146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.612303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.612313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.612525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.612535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 
00:27:08.030 [2024-07-12 17:35:26.612608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.612616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.612822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.612832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.613085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.613095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.613188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.613197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.613292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.613302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 
00:27:08.030 [2024-07-12 17:35:26.613441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.613451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.613543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.613555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.613784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.613794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.613968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.613978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.614127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.614138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 
00:27:08.030 [2024-07-12 17:35:26.614294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.614303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.614415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.614425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.614519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.614528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.614609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.614618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.614777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.614787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 
00:27:08.030 [2024-07-12 17:35:26.615018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.615028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.615128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.615138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.615218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.615227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.615359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.615368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 00:27:08.030 [2024-07-12 17:35:26.615673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.030 [2024-07-12 17:35:26.615706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.030 qpair failed and we were unable to recover it. 
00:27:08.030 [2024-07-12 17:35:26.615814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.615847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.615956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.615972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.616069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.616084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.616171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.616184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.616374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.616394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 
00:27:08.031 [2024-07-12 17:35:26.616545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.616559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.616663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.616676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.616851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.616864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.617105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.617116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.617208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.617218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 
00:27:08.031 [2024-07-12 17:35:26.617310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.617319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.617463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.617474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.617634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.617643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.617712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.617721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.617891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.617901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 
00:27:08.031 [2024-07-12 17:35:26.618136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.618145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.618241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.618253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.618341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.618350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.618514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.618524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.618734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.618743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 
00:27:08.031 [2024-07-12 17:35:26.618923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.618932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.619084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.619094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.619235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.619245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.619387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.619397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.619481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.619490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 
00:27:08.031 [2024-07-12 17:35:26.619701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.619710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.619784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.619793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.620013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.620023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.620106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.620115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.620347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.620357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 
00:27:08.031 [2024-07-12 17:35:26.620454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.620463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.620598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.620608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.620832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.620862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.620975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.621004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.621138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.621169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 
00:27:08.031 [2024-07-12 17:35:26.621399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.621430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.621579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.621609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.621862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.621891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.622111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.622121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 00:27:08.031 [2024-07-12 17:35:26.622270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.622280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.031 qpair failed and we were unable to recover it. 
00:27:08.031 [2024-07-12 17:35:26.622439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.031 [2024-07-12 17:35:26.622449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.622600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.622610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.622712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.622741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.622901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.622967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.623184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.623221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 
00:27:08.032 [2024-07-12 17:35:26.623470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.623484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.623569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.623582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.623768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.623781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.623864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.623878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.623989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.624002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 
00:27:08.032 [2024-07-12 17:35:26.624155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.624169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.624339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.624353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.624519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.624532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.624623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.624637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.624744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.624757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 
00:27:08.032 [2024-07-12 17:35:26.624850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.624864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.625024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.625039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.625232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.625246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.625394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.625407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.625496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.625509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 
00:27:08.032 [2024-07-12 17:35:26.625689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.625702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.625784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.625797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.625958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.625972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.626125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.626138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.626304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.626317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 
00:27:08.032 [2024-07-12 17:35:26.626553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.626584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.626777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.626806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.627023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.627037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.627120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.627132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 00:27:08.032 [2024-07-12 17:35:26.627269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.032 [2024-07-12 17:35:26.627283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.032 qpair failed and we were unable to recover it. 
00:27:08.032 [2024-07-12 17:35:26.627388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.032 [2024-07-12 17:35:26.627400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.032 qpair failed and we were unable to recover it.
00:27:08.032 [2024-07-12 17:35:26.627491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.032 [2024-07-12 17:35:26.627501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.032 qpair failed and we were unable to recover it.
00:27:08.032 [2024-07-12 17:35:26.627666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.032 [2024-07-12 17:35:26.627676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.032 qpair failed and we were unable to recover it.
00:27:08.032 [2024-07-12 17:35:26.627816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.032 [2024-07-12 17:35:26.627826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.032 qpair failed and we were unable to recover it.
00:27:08.032 [2024-07-12 17:35:26.628147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.032 [2024-07-12 17:35:26.628176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.032 qpair failed and we were unable to recover it.
00:27:08.032 [2024-07-12 17:35:26.628453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.032 [2024-07-12 17:35:26.628484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.032 qpair failed and we were unable to recover it.
00:27:08.032 [2024-07-12 17:35:26.628760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.032 [2024-07-12 17:35:26.628769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.032 qpair failed and we were unable to recover it.
00:27:08.032 [2024-07-12 17:35:26.628873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.032 [2024-07-12 17:35:26.628883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.032 qpair failed and we were unable to recover it.
00:27:08.032 [2024-07-12 17:35:26.629019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.032 [2024-07-12 17:35:26.629028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.032 qpair failed and we were unable to recover it.
00:27:08.032 [2024-07-12 17:35:26.629122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.032 [2024-07-12 17:35:26.629132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.032 qpair failed and we were unable to recover it.
00:27:08.032 [2024-07-12 17:35:26.629349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.032 [2024-07-12 17:35:26.629359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.032 qpair failed and we were unable to recover it.
00:27:08.032 [2024-07-12 17:35:26.629581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.629591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.629771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.629781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.629984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.630024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.630280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.630311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.630509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.630540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.630750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.630780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.630920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.630950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.631179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.631192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.631339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.631352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.631538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.631552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.631664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.631677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.631853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.631866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.631974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.631988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.632230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.632243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.632405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.632419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.632519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.632531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.632705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.632719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.632829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.632842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.633021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.633034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.633198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.633211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.633362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.633375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.633538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.633551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.633726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.633761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.633901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.633931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.634121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.634150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.634251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.634279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.634463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.634494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.634748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.634776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.634974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.635003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.635140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.635156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.635304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.635317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.635479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.635492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.635574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.635586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.635678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.635690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.635845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.635858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.635956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.033 [2024-07-12 17:35:26.635966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.033 qpair failed and we were unable to recover it.
00:27:08.033 [2024-07-12 17:35:26.636063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.636072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.636163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.636172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.636333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.636342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.636481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.636491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.636577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.636587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.636739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.636748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.636924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.636934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.637036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.637045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.637136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.637145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.637305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.637314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.637480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.637491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.637642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.637651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.637748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.637757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.637847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.637856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.638011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.638021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.638171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.638181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.638340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.638349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.638424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.638434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.638515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.638524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.638669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.638678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.638772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.638781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.639000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.639010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.639104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.639113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.639225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.639235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.639313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.639322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.639411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.639421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.639483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.639492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.639643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.639651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.639729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.639737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.639801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.639810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.639970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.639979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.640211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.640220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.640374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.640388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.640582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.640617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.640752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.640781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.640915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.640944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.641130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.641161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.641259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.641269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.641506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.641516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.641605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.641615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.034 qpair failed and we were unable to recover it.
00:27:08.034 [2024-07-12 17:35:26.641770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.034 [2024-07-12 17:35:26.641779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.641877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.641887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.642044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.642053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.642195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.642204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.642358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.642368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.642584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.642653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.642807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.642841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.643062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.643093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.643278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.643291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.643456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.643472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.643660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.643691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.643888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.643917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.644111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.644141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.644358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.644372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.644538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.644552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.644717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.644758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.644883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.644912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.645103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.645132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.645326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.645356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.645641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.645671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.645899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.645930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.646241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.646254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.646426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.646440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.646621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.646651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.646853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.646881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.647091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.647121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.647300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.647313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.647485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.035 [2024-07-12 17:35:26.647500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.035 qpair failed and we were unable to recover it.
00:27:08.035 [2024-07-12 17:35:26.647661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.035 [2024-07-12 17:35:26.647675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.035 qpair failed and we were unable to recover it. 00:27:08.035 [2024-07-12 17:35:26.647862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.035 [2024-07-12 17:35:26.647875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.035 qpair failed and we were unable to recover it. 00:27:08.035 [2024-07-12 17:35:26.647992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.035 [2024-07-12 17:35:26.648005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.035 qpair failed and we were unable to recover it. 00:27:08.035 [2024-07-12 17:35:26.648078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.035 [2024-07-12 17:35:26.648090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.035 qpair failed and we were unable to recover it. 00:27:08.035 [2024-07-12 17:35:26.648238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.035 [2024-07-12 17:35:26.648252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.035 qpair failed and we were unable to recover it. 
00:27:08.035 [2024-07-12 17:35:26.648471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.035 [2024-07-12 17:35:26.648487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.035 qpair failed and we were unable to recover it. 00:27:08.035 [2024-07-12 17:35:26.648658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.035 [2024-07-12 17:35:26.648671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.035 qpair failed and we were unable to recover it. 00:27:08.035 [2024-07-12 17:35:26.648784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.035 [2024-07-12 17:35:26.648798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.035 qpair failed and we were unable to recover it. 00:27:08.035 [2024-07-12 17:35:26.648957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.035 [2024-07-12 17:35:26.648970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.035 qpair failed and we were unable to recover it. 00:27:08.035 [2024-07-12 17:35:26.649114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.035 [2024-07-12 17:35:26.649127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.035 qpair failed and we were unable to recover it. 
00:27:08.035 [2024-07-12 17:35:26.649290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.035 [2024-07-12 17:35:26.649303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.035 qpair failed and we were unable to recover it. 00:27:08.035 [2024-07-12 17:35:26.649451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.035 [2024-07-12 17:35:26.649465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.035 qpair failed and we were unable to recover it. 00:27:08.035 [2024-07-12 17:35:26.649629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.035 [2024-07-12 17:35:26.649642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.035 qpair failed and we were unable to recover it. 00:27:08.035 [2024-07-12 17:35:26.649788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.649801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.649895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.649909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 
00:27:08.036 [2024-07-12 17:35:26.650127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.650141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.650257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.650270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.650364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.650381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.650588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.650602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.650698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.650712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 
00:27:08.036 [2024-07-12 17:35:26.650879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.650892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.651039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.651068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.651213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.651242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.651361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.651401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.651527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.651557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 
00:27:08.036 [2024-07-12 17:35:26.651810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.651841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.652027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.652055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.652197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.652210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.652420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.652434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.652541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.652554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 
00:27:08.036 [2024-07-12 17:35:26.652721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.652735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.652885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.652899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.653185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.653251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.653503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.653571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.653782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.653815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 
00:27:08.036 [2024-07-12 17:35:26.654037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.654066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.654256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.654269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.654467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.654499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.654701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.654731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.654950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.654980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 
00:27:08.036 [2024-07-12 17:35:26.655097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.655110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.655275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.655288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.655475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.655489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.655572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.655587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.655816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.655829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 
00:27:08.036 [2024-07-12 17:35:26.656046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.656059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.656218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.656232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.656324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.656337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.656521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.656535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.656642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.656656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 
00:27:08.036 [2024-07-12 17:35:26.656808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.656821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.656981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.656994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.657067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.657079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.036 qpair failed and we were unable to recover it. 00:27:08.036 [2024-07-12 17:35:26.657190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.036 [2024-07-12 17:35:26.657203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.657362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.657375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 
00:27:08.037 [2024-07-12 17:35:26.657477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.657490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.657564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.657576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.657817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.657830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.657982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.657995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.658152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.658168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 
00:27:08.037 [2024-07-12 17:35:26.658358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.658371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.658465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.658479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.658592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.658605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.658687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.658705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.658877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.658891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 
00:27:08.037 [2024-07-12 17:35:26.658995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.659009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.659253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.659286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.659418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.659448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.659704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.659745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.659952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.659989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 
00:27:08.037 [2024-07-12 17:35:26.660083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.660096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.660181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.660194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.660385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.660401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.660569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.660584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.660680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.660694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 
00:27:08.037 [2024-07-12 17:35:26.660911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.660924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.661139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.661152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.661258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.661274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.661494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.661508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.661612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.661625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 
00:27:08.037 [2024-07-12 17:35:26.661718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.661734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.661836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.661849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.662016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.662030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.662215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.662229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.662314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.662327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 
00:27:08.037 [2024-07-12 17:35:26.662412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.662425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.662598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.662612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.662781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.662795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.662907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.662920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.663010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.663024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 
00:27:08.037 [2024-07-12 17:35:26.663107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.663119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.663269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.663282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.037 qpair failed and we were unable to recover it. 00:27:08.037 [2024-07-12 17:35:26.663431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.037 [2024-07-12 17:35:26.663445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.663533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.663546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.663777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.663791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 
00:27:08.038 [2024-07-12 17:35:26.663894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.663907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.664001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.664014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.664245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.664258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.664353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.664367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.664578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.664597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 
00:27:08.038 [2024-07-12 17:35:26.664703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.664717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.664826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.664836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.664981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.664991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.665079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.665088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.665237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.665247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 
00:27:08.038 [2024-07-12 17:35:26.665319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.665328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.665412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.665421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.665516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.665526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.665683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.665693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.665780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.665788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 
00:27:08.038 [2024-07-12 17:35:26.665967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.665977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.666122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.666133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.666223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.666235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.666322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.666331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.666566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.666578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 
00:27:08.038 [2024-07-12 17:35:26.666755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.666765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.666927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.666936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.667045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.667055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.667131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.667140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.667242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.667252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 
00:27:08.038 [2024-07-12 17:35:26.667332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.667342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.667506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.667516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.667608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.667617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.667777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.667787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.667867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.667876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 
00:27:08.038 [2024-07-12 17:35:26.668018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.668028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.668198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.668225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.668429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.668459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.668644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.668673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 00:27:08.038 [2024-07-12 17:35:26.668876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.038 [2024-07-12 17:35:26.668905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.038 qpair failed and we were unable to recover it. 
00:27:08.039 [2024-07-12 17:35:26.669147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.669176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.669321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.669350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.669560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.669594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.669845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.669874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.670067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.670096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 
00:27:08.039 [2024-07-12 17:35:26.670274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.670303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.670490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.670521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.670725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.670754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.671025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.671054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.671303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.671338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 
00:27:08.039 [2024-07-12 17:35:26.671637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.671667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.671918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.671948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.672197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.672227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.672369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.672409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.672542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.672571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 
00:27:08.039 [2024-07-12 17:35:26.672848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.672878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.673130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.673159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.673350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.673364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.673586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.673617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.673898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.673927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 
00:27:08.039 [2024-07-12 17:35:26.674071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.674100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.674313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.674342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.674497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.674528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.674717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.674746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.675021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.675050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 
00:27:08.039 [2024-07-12 17:35:26.675180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.675208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.675328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.675342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.675501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.675515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.675668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.675681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.675861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.675891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 
00:27:08.039 [2024-07-12 17:35:26.676076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.676105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.676299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.676328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.676519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.676549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.676687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.676716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.676998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.677027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 
00:27:08.039 [2024-07-12 17:35:26.677159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.677188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.677401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.677415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.677606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.677619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.677713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.677726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 00:27:08.039 [2024-07-12 17:35:26.677889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.677902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.039 qpair failed and we were unable to recover it. 
00:27:08.039 [2024-07-12 17:35:26.678078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.039 [2024-07-12 17:35:26.678107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.678288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.678317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.678535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.678565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.678758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.678787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.678928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.678956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 
00:27:08.040 [2024-07-12 17:35:26.679089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.679118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.679312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.679325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.679566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.679580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.679763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.679776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.679859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.679874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 
00:27:08.040 [2024-07-12 17:35:26.680037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.680051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.680187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.680200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.680400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.680430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.680634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.680663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.680861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.680890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 
00:27:08.040 [2024-07-12 17:35:26.681025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.681054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.681247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.681261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.681423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.681437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.681598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.681611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.681720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.681733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 
00:27:08.040 [2024-07-12 17:35:26.681834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.681848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.682001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.682014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.682250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.682287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.682422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.682453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.682668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.682697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 
00:27:08.040 [2024-07-12 17:35:26.682895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.682932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.683117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.683130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.683303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.683333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.683547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.683577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.683783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.683812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 
00:27:08.040 [2024-07-12 17:35:26.684019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.684032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.684119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.684131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.684227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.684239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.684326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.684339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.684443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.684458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 
00:27:08.040 [2024-07-12 17:35:26.684542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.684554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.684705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.684719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.684800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.684812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.684966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.684979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.685232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.685246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 
00:27:08.040 [2024-07-12 17:35:26.685358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.040 [2024-07-12 17:35:26.685371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.040 qpair failed and we were unable to recover it. 00:27:08.040 [2024-07-12 17:35:26.685471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.685484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.685588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.685602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.685837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.685850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.685948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.685961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 
00:27:08.041 [2024-07-12 17:35:26.686178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.686192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.686340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.686354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.686584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.686598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.686814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.686855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.687058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.687092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 
00:27:08.041 [2024-07-12 17:35:26.687275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.687304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.687591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.687605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.687706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.687718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.687815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.687829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.687987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.688001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 
00:27:08.041 [2024-07-12 17:35:26.688168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.688181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.688329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.688342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.688494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.688508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.688695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.688708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.688822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.688835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 
00:27:08.041 [2024-07-12 17:35:26.688923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.688936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.689037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.689051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.689199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.689212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.689322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.689335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.689434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.689447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 
00:27:08.041 [2024-07-12 17:35:26.689556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.689569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.689726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.689739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.689818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.689830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.689993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.690006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.690102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.690115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 
00:27:08.041 [2024-07-12 17:35:26.690278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.690292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.690448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.690462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.690560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.690574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.690731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.690744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.690981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.690994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 
00:27:08.041 [2024-07-12 17:35:26.691170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.691183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.691348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.691361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.691539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.691569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.691760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.691789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.691993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.692023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 
00:27:08.041 [2024-07-12 17:35:26.692327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.692356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.041 [2024-07-12 17:35:26.692503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.041 [2024-07-12 17:35:26.692533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.041 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.692648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.692677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.692928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.692966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.693067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.693081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 
00:27:08.042 [2024-07-12 17:35:26.693251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.693265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.693352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.693365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.693561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.693595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.693784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.693810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.693918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.693934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 
00:27:08.042 [2024-07-12 17:35:26.694112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.694123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.694203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.694214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.694366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.694382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.694543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.694580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.694734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.694764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 
00:27:08.042 [2024-07-12 17:35:26.694900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.694931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.695162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.695192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.695397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.695428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.695550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.695579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.695727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.695757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 
00:27:08.042 [2024-07-12 17:35:26.695887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.695917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.696038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.696067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.696366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.696376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.696546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.696557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 00:27:08.042 [2024-07-12 17:35:26.696659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.042 [2024-07-12 17:35:26.696669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.042 qpair failed and we were unable to recover it. 
00:27:08.042 [2024-07-12 17:35:26.696830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.042 [2024-07-12 17:35:26.696840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.042 qpair failed and we were unable to recover it.
00:27:08.042 [2024-07-12 17:35:26.696979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.042 [2024-07-12 17:35:26.696989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.042 qpair failed and we were unable to recover it.
00:27:08.042 [2024-07-12 17:35:26.697131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.042 [2024-07-12 17:35:26.697141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.042 qpair failed and we were unable to recover it.
00:27:08.042 [2024-07-12 17:35:26.697214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.042 [2024-07-12 17:35:26.697223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.042 qpair failed and we were unable to recover it.
00:27:08.042 [2024-07-12 17:35:26.697361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.042 [2024-07-12 17:35:26.697371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.042 qpair failed and we were unable to recover it.
00:27:08.042 [2024-07-12 17:35:26.697461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.042 [2024-07-12 17:35:26.697470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.042 qpair failed and we were unable to recover it.
00:27:08.042 [2024-07-12 17:35:26.697548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.042 [2024-07-12 17:35:26.697557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.042 qpair failed and we were unable to recover it.
00:27:08.042 [2024-07-12 17:35:26.697649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.042 [2024-07-12 17:35:26.697658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.042 qpair failed and we were unable to recover it.
00:27:08.042 [2024-07-12 17:35:26.697815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.042 [2024-07-12 17:35:26.697824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.042 qpair failed and we were unable to recover it.
00:27:08.042 [2024-07-12 17:35:26.697927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.042 [2024-07-12 17:35:26.697937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.042 qpair failed and we were unable to recover it.
00:27:08.042 [2024-07-12 17:35:26.698028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.042 [2024-07-12 17:35:26.698040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.042 qpair failed and we were unable to recover it.
00:27:08.042 [2024-07-12 17:35:26.698130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.042 [2024-07-12 17:35:26.698142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.698283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.698293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.698393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.698406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.698598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.698608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.698713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.698723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.698861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.698871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.699051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.699061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.699151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.699162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.699304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.699316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.699412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.699422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.699500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.699510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.699756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.699767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.699933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.699943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.700032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.700048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.700178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.700188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.700284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.700294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.700442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.700452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.700532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.700541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.700687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.700697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.700776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.700786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.700859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.700868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.700963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.700972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.701122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.701132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.701229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.701241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.701448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.701458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.701515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.701524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.701607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.701616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.701759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.701769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.701907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.701917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.701985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.701994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.702135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.702145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.702290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.702299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.702401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.702412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.702555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.702565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.702737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.702747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.703003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.703013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.703106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.703116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.703205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.703215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.703301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.703311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.703452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.703463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.703621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.703631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.043 [2024-07-12 17:35:26.703768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.043 [2024-07-12 17:35:26.703778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.043 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.703872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.703881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.703984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.703994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.704134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.704144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.704238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.704249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.704400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.704410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.704557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.704567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.704647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.704657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.704730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.704739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.704822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.704830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.704911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.704921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.705075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.705086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.705176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.705188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.705282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.705292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.705525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.705535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.705613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.705622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.705775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.705785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.705867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.705876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.706016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.706026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.706096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.706105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.706245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.706255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.706396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.706407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.706595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.706605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.706684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.706693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.706779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.706789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.706940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.706950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.707030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.707039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.707128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.707137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.707281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.707291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.707385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.707396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.707490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.707499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.707564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.707573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.707784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.707794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.707937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.707947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.708028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.708040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.708209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.708219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.708434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.708464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.708650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.708679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.708867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.708896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.709051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.709091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.709298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.709312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.709425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.709440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.709610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.044 [2024-07-12 17:35:26.709624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.044 qpair failed and we were unable to recover it.
00:27:08.044 [2024-07-12 17:35:26.709797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.044 [2024-07-12 17:35:26.709811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.044 qpair failed and we were unable to recover it. 00:27:08.044 [2024-07-12 17:35:26.709957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.044 [2024-07-12 17:35:26.709970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.044 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.710144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.710155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.710254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.710263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.710346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.710355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 
00:27:08.045 [2024-07-12 17:35:26.710590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.710601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.710758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.710768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.710860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.710870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.711063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.711072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.711159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.711171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 
00:27:08.045 [2024-07-12 17:35:26.711384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.711394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.711550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.711560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.711636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.711645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.711800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.711809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.711890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.711899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 
00:27:08.045 [2024-07-12 17:35:26.711993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.712003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.712146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.712156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.712367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.712381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.712467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.712477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.712713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.712723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 
00:27:08.045 [2024-07-12 17:35:26.712930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.712940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.713089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.713119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.713409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.713439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.713588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.713619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.713869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.713898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 
00:27:08.045 [2024-07-12 17:35:26.714009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.714019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.714186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.714196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.714374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.714430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.714565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.714595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.714784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.714814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 
00:27:08.045 [2024-07-12 17:35:26.714935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.714964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.715100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.715110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.715197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.715206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.715303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.715313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.715412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.715423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 
00:27:08.045 [2024-07-12 17:35:26.715514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.715524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.715712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.715745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.715908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.715924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.716033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.716047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.716204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.716217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 
00:27:08.045 [2024-07-12 17:35:26.716383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.716403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.716557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.716570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.716735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.716748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.716899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.716912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.045 [2024-07-12 17:35:26.717073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.717087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 
00:27:08.045 [2024-07-12 17:35:26.717244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.045 [2024-07-12 17:35:26.717255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.045 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.717336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.717347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.717431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.717441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.717519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.717529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.717681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.717693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 
00:27:08.046 [2024-07-12 17:35:26.717781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.717791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.717944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.717954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.718052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.718062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.718198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.718209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.718278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.718287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 
00:27:08.046 [2024-07-12 17:35:26.718396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.718407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.718565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.718575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.718721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.718752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.718953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.718982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.719112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.719142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 
00:27:08.046 [2024-07-12 17:35:26.719259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.719289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.719375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.719389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.719600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.719611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.719694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.719704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.719935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.719946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 
00:27:08.046 [2024-07-12 17:35:26.720107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.720117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.720335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.720365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.720521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.720551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.720828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.720857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.721053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.721063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 
00:27:08.046 [2024-07-12 17:35:26.721218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.721247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.721477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.721508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.721654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.721683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.721939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.721968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.722192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.722222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 
00:27:08.046 [2024-07-12 17:35:26.722354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.722364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.722512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.722522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.722612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.722622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.722723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.722732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.722890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.722900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 
00:27:08.046 [2024-07-12 17:35:26.722987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.722996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.723102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.723110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.723318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.723328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.723415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.723424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 00:27:08.046 [2024-07-12 17:35:26.723555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.046 [2024-07-12 17:35:26.723565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.046 qpair failed and we were unable to recover it. 
00:27:08.047 [2024-07-12 17:35:26.723671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.047 [2024-07-12 17:35:26.723681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.047 qpair failed and we were unable to recover it. 
00:27:08.049 [... same message pair (connect() failed, errno = 111 / sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeated for every reconnect attempt through 2024-07-12 17:35:26.743603 ...] 
00:27:08.049 [2024-07-12 17:35:26.743690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.743700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.743793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.743802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.744004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.744014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.744165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.744175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.744333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.744343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 
00:27:08.049 [2024-07-12 17:35:26.744440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.744450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.744536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.744545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.744633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.744642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.744789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.744799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.744892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.744901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 
00:27:08.049 [2024-07-12 17:35:26.745132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.745142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.745360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.745369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.745449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.745458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.745530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.745539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.745602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.745614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 
00:27:08.049 [2024-07-12 17:35:26.745758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.745767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.745871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.745880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.746043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.746053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.746132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.746141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.746221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.746231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 
00:27:08.049 [2024-07-12 17:35:26.746334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.746343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.746515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.746525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.049 [2024-07-12 17:35:26.746674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.049 [2024-07-12 17:35:26.746684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.049 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.746774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.746783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.746926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.746936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 
00:27:08.050 [2024-07-12 17:35:26.747144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.747154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.747216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.747225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.747308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.747317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.747405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.747415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.747559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.747569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 
00:27:08.050 [2024-07-12 17:35:26.747641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.747649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.747742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.747751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.747819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.747828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.747912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.747921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.748095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.748105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 
00:27:08.050 [2024-07-12 17:35:26.748182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.748191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.748274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.748284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.748437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.748447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.748537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.748547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.748689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.748699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 
00:27:08.050 [2024-07-12 17:35:26.748837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.748847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.748988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.748998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.749078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.749087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.749157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.749166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.749246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.749255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 
00:27:08.050 [2024-07-12 17:35:26.749328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.749338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.749428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.749437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.749532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.749541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.749615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.749624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.749694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.749703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 
00:27:08.050 [2024-07-12 17:35:26.749787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.749796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.749983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.749992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.750045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.750054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.750196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.750205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.750424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.750434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 
00:27:08.050 [2024-07-12 17:35:26.750511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.750520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.750596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.750605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.750687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.750696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.750836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.750845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.751055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.751065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 
00:27:08.050 [2024-07-12 17:35:26.751138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.751148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.751300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.751310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.751463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.751473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.751549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.751559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 00:27:08.050 [2024-07-12 17:35:26.751634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.050 [2024-07-12 17:35:26.751643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.050 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-07-12 17:35:26.751730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.751739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.751903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.751912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.751991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.752000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.752079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.752088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.752181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.752190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-07-12 17:35:26.752326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.752336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.752420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.752430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.752610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.752620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.752720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.752732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.752882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.752892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-07-12 17:35:26.752952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.752961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.753053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.753065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.753159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.753168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.753246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.753257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.753396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.753406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-07-12 17:35:26.753489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.753499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.753712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.753721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.753829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.753838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.753995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.754005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.754083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.754093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-07-12 17:35:26.754249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.754259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.754344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.754353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.754526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.754539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.754628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.754637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.754718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.754727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-07-12 17:35:26.754823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.754833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.754924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.754934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.755078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.755087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.755296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.755306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.755384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.755394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-07-12 17:35:26.755551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.755561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.755665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.755675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.755747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.755755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.755866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.755876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.756042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.756084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-07-12 17:35:26.756291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.756320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.756463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.756493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.756681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.756710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.756902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.756932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.757119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.757148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-07-12 17:35:26.757364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.757374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.757485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.757495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.757642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.757652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.757790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.757799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.757939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.757949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 
00:27:08.051 [2024-07-12 17:35:26.758056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.758065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.758205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.758215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.758316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.758325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.051 qpair failed and we were unable to recover it. 00:27:08.051 [2024-07-12 17:35:26.758480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.051 [2024-07-12 17:35:26.758490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.758631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.758640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-07-12 17:35:26.758865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.758894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.759096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.759135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.759400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.759431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.759622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.759651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.759837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.759866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-07-12 17:35:26.760135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.760145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.760299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.760309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.760449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.760460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.760529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.760557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.760739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.760768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-07-12 17:35:26.760950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.760979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.761182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.761211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.761406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.761435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.761764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.761793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.761996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.762025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-07-12 17:35:26.762217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.762246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.762437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.762448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.762656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.762666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.762740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.762749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.762807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.762816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-07-12 17:35:26.762915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.762925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.763029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.763039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.763246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.763255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.763456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.763487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.763595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.763623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-07-12 17:35:26.763818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.763847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.763977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.763986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.764077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.764087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.764229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.764238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.764389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.764399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-07-12 17:35:26.764559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.764568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.764784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.764813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.765076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.765105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.765235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.765264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.765462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.765473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-07-12 17:35:26.765679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.765708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.765904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.765934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.766158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.766187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.766404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.766414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.766591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.766601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-07-12 17:35:26.766765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.766794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.766973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.767008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.767158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.767188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.767431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.767441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.767660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.767670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 
00:27:08.052 [2024-07-12 17:35:26.767758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.767767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.767907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.767916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.768057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.768066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.768161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.052 [2024-07-12 17:35:26.768171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.052 qpair failed and we were unable to recover it. 00:27:08.052 [2024-07-12 17:35:26.768312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.053 [2024-07-12 17:35:26.768322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.053 qpair failed and we were unable to recover it. 
00:27:08.053 [2024-07-12 17:35:26.768479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.053 [2024-07-12 17:35:26.768489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.053 qpair failed and we were unable to recover it. 00:27:08.053 [2024-07-12 17:35:26.768633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.053 [2024-07-12 17:35:26.768660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.053 qpair failed and we were unable to recover it. 00:27:08.053 [2024-07-12 17:35:26.768877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.053 [2024-07-12 17:35:26.768906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.053 qpair failed and we were unable to recover it. 00:27:08.053 [2024-07-12 17:35:26.769094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.053 [2024-07-12 17:35:26.769124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.053 qpair failed and we were unable to recover it. 00:27:08.053 [2024-07-12 17:35:26.769391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.053 [2024-07-12 17:35:26.769401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.053 qpair failed and we were unable to recover it. 
00:27:08.053 [2024-07-12 17:35:26.769479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.053 [2024-07-12 17:35:26.769488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.053 qpair failed and we were unable to recover it. 00:27:08.053 [2024-07-12 17:35:26.769631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.053 [2024-07-12 17:35:26.769640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.053 qpair failed and we were unable to recover it. 00:27:08.053 [2024-07-12 17:35:26.769742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.053 [2024-07-12 17:35:26.769753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.053 qpair failed and we were unable to recover it. 00:27:08.053 [2024-07-12 17:35:26.769893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.053 [2024-07-12 17:35:26.769902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.053 qpair failed and we were unable to recover it. 00:27:08.053 [2024-07-12 17:35:26.770049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.053 [2024-07-12 17:35:26.770058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.053 qpair failed and we were unable to recover it. 
00:27:08.340 [2024-07-12 17:35:26.784730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.340 [2024-07-12 17:35:26.784763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.340 qpair failed and we were unable to recover it.
00:27:08.340 [2024-07-12 17:35:26.785013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.340 [2024-07-12 17:35:26.785046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.340 qpair failed and we were unable to recover it.
00:27:08.341 [2024-07-12 17:35:26.787738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.787748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.787890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.787900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.788047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.788056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.788149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.788158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.788344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.788354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 
00:27:08.341 [2024-07-12 17:35:26.788453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.788463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.788655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.788665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.788739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.788748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.788995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.789005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.789171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.789181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 
00:27:08.341 [2024-07-12 17:35:26.789330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.789339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.789499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.789509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.789607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.789616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.789768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.789777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.789907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.789917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 
00:27:08.341 [2024-07-12 17:35:26.789993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.790002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.790173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.790183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.790326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.790336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.790504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.790516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.790579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.790588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 
00:27:08.341 [2024-07-12 17:35:26.790760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.790770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.790915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.790925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.791013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.791022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.791125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.791133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.791224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.791234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 
00:27:08.341 [2024-07-12 17:35:26.791392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.791403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.791483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.791493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.791633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.341 [2024-07-12 17:35:26.791643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.341 qpair failed and we were unable to recover it. 00:27:08.341 [2024-07-12 17:35:26.791733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.791742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.791882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.791891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 
00:27:08.342 [2024-07-12 17:35:26.791965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.791974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.792063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.792072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.792166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.792174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.792270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.792280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.792422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.792431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 
00:27:08.342 [2024-07-12 17:35:26.792512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.792521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.792683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.792692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.792788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.792797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.792979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.792988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.793137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.793147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 
00:27:08.342 [2024-07-12 17:35:26.793243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.793252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.793338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.793347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.793435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.793445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.793519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.793528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.793641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.793651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 
00:27:08.342 [2024-07-12 17:35:26.793754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.793763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.793913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.793922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.794062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.794072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.794176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.794186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.794260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.794269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 
00:27:08.342 [2024-07-12 17:35:26.794415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.794425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.794500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.794509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.794668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.794678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.794818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.794828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.794902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.794911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 
00:27:08.342 [2024-07-12 17:35:26.795080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.795089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.795156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.795166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.795253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.795262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.795346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.795358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.795444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.795454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 
00:27:08.342 [2024-07-12 17:35:26.795544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.795554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.795771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.795781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.795877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.795886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.795979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.795989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.796085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.796095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 
00:27:08.342 [2024-07-12 17:35:26.796186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.796196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.796295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.796305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.342 [2024-07-12 17:35:26.796516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.342 [2024-07-12 17:35:26.796526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.342 qpair failed and we were unable to recover it. 00:27:08.343 [2024-07-12 17:35:26.796621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-07-12 17:35:26.796631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-07-12 17:35:26.796710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-07-12 17:35:26.796720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 
00:27:08.343 [2024-07-12 17:35:26.796928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-07-12 17:35:26.796939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-07-12 17:35:26.797020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-07-12 17:35:26.797030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-07-12 17:35:26.797109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-07-12 17:35:26.797119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-07-12 17:35:26.797193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-07-12 17:35:26.797201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-07-12 17:35:26.797275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-07-12 17:35:26.797283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 
00:27:08.343 [2024-07-12 17:35:26.797424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-07-12 17:35:26.797434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-07-12 17:35:26.797512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-07-12 17:35:26.797520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-07-12 17:35:26.797690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-07-12 17:35:26.797700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-07-12 17:35:26.797795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-07-12 17:35:26.797804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 00:27:08.343 [2024-07-12 17:35:26.797883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.343 [2024-07-12 17:35:26.797893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.343 qpair failed and we were unable to recover it. 
00:27:08.346 [2024-07-12 17:35:26.816218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.816228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.816331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.816356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.816526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.816536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.816638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.816647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.816717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.816726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 
00:27:08.346 [2024-07-12 17:35:26.816814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.816823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.817045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.817074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.817220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.817249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.817365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.817408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.817583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.817593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 
00:27:08.346 [2024-07-12 17:35:26.817801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.817810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.817971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.817981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.818062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.818071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.818212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.818221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.818312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.818321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 
00:27:08.346 [2024-07-12 17:35:26.818548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.818558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.818773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.818782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.818998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.819028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.819152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.819181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.819383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.819413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 
00:27:08.346 [2024-07-12 17:35:26.819704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.819714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.819871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.819880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.820025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.820035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.820173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.820182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.820294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.820303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 
00:27:08.346 [2024-07-12 17:35:26.820404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.820415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.820618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.820628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.820813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.820823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.820963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.820974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.346 [2024-07-12 17:35:26.821127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.821137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 
00:27:08.346 [2024-07-12 17:35:26.821292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.346 [2024-07-12 17:35:26.821302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.346 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.821381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.821390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.821477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.821486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.821661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.821671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.821815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.821824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 
00:27:08.347 [2024-07-12 17:35:26.821922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.821930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.822081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.822091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.822186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.822195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.822401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.822411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.822589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.822599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 
00:27:08.347 [2024-07-12 17:35:26.822711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.822720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.822873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.822883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.822974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.822983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.823085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.823094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.823361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.823371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 
00:27:08.347 [2024-07-12 17:35:26.823515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.823525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.823628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.823637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.823732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.823741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.823904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.823915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.824069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.824078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 
00:27:08.347 [2024-07-12 17:35:26.824230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.824239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.824425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.824436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.824578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.824588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.824759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.824768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.824863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.824873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 
00:27:08.347 [2024-07-12 17:35:26.824962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.824972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.825056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.825066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.825156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.825165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.825321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.825332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.825494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.825504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 
00:27:08.347 [2024-07-12 17:35:26.825596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.825606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.825692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.825702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.825842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.825852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.825929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.825938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.826168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.826178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 
00:27:08.347 [2024-07-12 17:35:26.826423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.826433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.826527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.826537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.826678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.826688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.826768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.826779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.826867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.826876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 
00:27:08.347 [2024-07-12 17:35:26.827020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.827030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.347 qpair failed and we were unable to recover it. 00:27:08.347 [2024-07-12 17:35:26.827188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.347 [2024-07-12 17:35:26.827199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.827291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.827300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.827464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.827475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.827615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.827624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 
00:27:08.348 [2024-07-12 17:35:26.827714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.827724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.827874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.827883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.827995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.828004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.828069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.828077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.828155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.828165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 
00:27:08.348 [2024-07-12 17:35:26.828293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.828302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.828391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.828400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.828503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.828513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.828605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.828614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.828704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.828714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 
00:27:08.348 [2024-07-12 17:35:26.828861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.828871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.828976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.828987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.829134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.829144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.829292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.829302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.829541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.829551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 
00:27:08.348 [2024-07-12 17:35:26.829639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.829648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.829800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.829810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.829889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.829897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.829982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.829992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.830067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.830076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 
00:27:08.348 [2024-07-12 17:35:26.830159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.830168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.830321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.830330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.830573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.830583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.830659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.830668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.830757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.830765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 
00:27:08.348 [2024-07-12 17:35:26.830883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.830892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.830991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.831001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.831164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.831175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.831319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.831328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 00:27:08.348 [2024-07-12 17:35:26.831464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.831502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.348 qpair failed and we were unable to recover it. 
00:27:08.348 [2024-07-12 17:35:26.831686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.348 [2024-07-12 17:35:26.831715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.831929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.831959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.832106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.832135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.832269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.832302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.832548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.832559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 
00:27:08.349 [2024-07-12 17:35:26.832699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.832709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.832852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.832861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.833018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.833028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.833198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.833208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.833412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.833422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 
00:27:08.349 [2024-07-12 17:35:26.833519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.833529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.833618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.833627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.833775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.833785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.833938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.833947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.834034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.834044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 
00:27:08.349 [2024-07-12 17:35:26.834188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.834197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.834275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.834285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.834427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.834437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.834527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.834538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.834689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.834700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 
00:27:08.349 [2024-07-12 17:35:26.835286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.835297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.835483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.835493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.835653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.835663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.835822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.835831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.835988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.835998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 
00:27:08.349 [2024-07-12 17:35:26.836197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.836227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.836420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.836453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.836587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.836597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.836750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.836759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.836856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.836865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 
00:27:08.349 [2024-07-12 17:35:26.837068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.837100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.837282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.837297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.837411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.837425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.837598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.837611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.837763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.837778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 
00:27:08.349 [2024-07-12 17:35:26.837874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.837887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.838047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.838061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.838281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.838294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.838446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.838460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.838677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.838690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 
00:27:08.349 [2024-07-12 17:35:26.838852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.349 [2024-07-12 17:35:26.838866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.349 qpair failed and we were unable to recover it. 00:27:08.349 [2024-07-12 17:35:26.838963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.838976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.839060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.839073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.839154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.839172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.839320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.839334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 
00:27:08.350 [2024-07-12 17:35:26.839428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.839440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.839582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.839592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.839671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.839681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.839771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.839780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.839935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.839945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 
00:27:08.350 [2024-07-12 17:35:26.840084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.840094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.840189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.840199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.840294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.840303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.840389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.840399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.840507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.840517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 
00:27:08.350 [2024-07-12 17:35:26.840725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.840735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.840830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.840839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.841088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.841098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.841259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.841269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.841348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.841358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 
00:27:08.350 [2024-07-12 17:35:26.841520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.841530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.841677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.841687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.841849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.841859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.842107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.842116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 00:27:08.350 [2024-07-12 17:35:26.842259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.350 [2024-07-12 17:35:26.842269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.350 qpair failed and we were unable to recover it. 
00:27:08.350 [2024-07-12 17:35:26.842357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.842366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.842470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.842480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.842571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.842581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.842671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.842680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.842836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.842846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.843025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.843058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.843256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.843289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.843458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.843474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.843575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.843586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.843804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.843814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.843910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.843920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.844093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.844103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.844178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.844188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.844273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.844282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.844433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.844443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.844698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.844708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.350 [2024-07-12 17:35:26.844978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.350 [2024-07-12 17:35:26.844988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.350 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.845169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.845179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.845321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.845332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.845406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.845415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.845646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.845656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.845815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.845825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.845898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.845908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.845994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.846003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.846163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.846173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.846271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.846281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.846375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.846394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.846562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.846572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.846645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.846655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.846740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.846750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.846841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.846850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.846979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.846989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.847065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.847074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.847193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.847202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.847348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.847358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.847558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.847589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.847717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.847746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.847881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.847911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.848048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.848078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.848375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.848390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.848621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.848630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.848784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.848794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.849026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.849035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.849223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.849233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.849388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.849398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.849552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.849565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.849725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.849735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.849876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.849886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.850067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.850076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.850189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.850199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.850290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.850300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.850401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.850411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.850570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.850580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.850760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.850771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.850910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.850920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.851088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.851098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.851250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.351 [2024-07-12 17:35:26.851260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.351 qpair failed and we were unable to recover it.
00:27:08.351 [2024-07-12 17:35:26.851354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.851364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.851467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.851477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.851569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.851579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.851655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.851664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.851828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.851838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.851935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.851945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.852093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.852103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.852181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.852190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.852344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.852355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.852446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.852456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.852710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.852720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.852924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.852934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.853018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.853027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.853097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.853107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.853213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.853222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.853332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.853342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.853414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.853423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.853496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.853505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.853748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.853758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.853848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.853858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.853952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.853962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.854046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.854055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.854215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.854225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.854298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.854307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.854463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.854473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.854550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.854559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.854633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.854643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.854730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.854740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.854829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.854842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.854935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.854945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.855018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.855027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.855252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.855262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.855332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.855341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.855492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.855502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.352 qpair failed and we were unable to recover it.
00:27:08.352 [2024-07-12 17:35:26.855597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.352 [2024-07-12 17:35:26.855607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.855747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.855757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.855898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.855908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.856052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.856062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.856218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.856227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.856410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.856419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.856510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.856520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.856604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.856613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.856704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.856714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.856869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.856879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.857048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.857058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.857131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.857140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.857279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.857288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.857512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.857522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.857661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.857671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.857824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.857834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.857908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.857917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.858011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.858022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.858185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.858195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.858363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.858372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.858464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.858473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.858554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.858563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.858640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.858649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.858789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.353 [2024-07-12 17:35:26.858799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.353 qpair failed and we were unable to recover it.
00:27:08.353 [2024-07-12 17:35:26.858884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.858892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.858971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.858982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.859060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.859069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.859153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.859161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.859238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.859247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 
00:27:08.353 [2024-07-12 17:35:26.859414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.859425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.859582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.859591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.859664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.859673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.859880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.859890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.860032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.860042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 
00:27:08.353 [2024-07-12 17:35:26.860224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.860236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.860385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.860395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.860490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.860500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.860650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.860660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.860909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.860919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 
00:27:08.353 [2024-07-12 17:35:26.861059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.861089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.861275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.353 [2024-07-12 17:35:26.861304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.353 qpair failed and we were unable to recover it. 00:27:08.353 [2024-07-12 17:35:26.861516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.861546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.861727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.861736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.861968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.861978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 
00:27:08.354 [2024-07-12 17:35:26.862139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.862149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.862233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.862243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.862334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.862344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.862559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.862569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.862666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.862676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 
00:27:08.354 [2024-07-12 17:35:26.862837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.862847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.863000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.863010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.863174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.863183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.863331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.863341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.863434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.863444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 
00:27:08.354 [2024-07-12 17:35:26.863532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.863542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.863619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.863629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.863742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.863751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.863925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.863935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.864145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.864154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 
00:27:08.354 [2024-07-12 17:35:26.864231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.864239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.864383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.864393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.864481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.864493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.864583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.864592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.864732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.864742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 
00:27:08.354 [2024-07-12 17:35:26.864819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.864828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.864914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.864924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.865075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.865085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.865168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.865178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.865389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.865399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 
00:27:08.354 [2024-07-12 17:35:26.865551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.865560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.865639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.865649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.865733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.865743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.865825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.865835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.866083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.866093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 
00:27:08.354 [2024-07-12 17:35:26.866177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.866188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.866399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.866409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.866549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.866558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.866628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.866637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.866733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.866743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 
00:27:08.354 [2024-07-12 17:35:26.866981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.354 [2024-07-12 17:35:26.866990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.354 qpair failed and we were unable to recover it. 00:27:08.354 [2024-07-12 17:35:26.867066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.867075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.867162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.867172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.867261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.867271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.867480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.867490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 
00:27:08.355 [2024-07-12 17:35:26.867630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.867639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.867713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.867723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.867815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.867825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.867982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.867991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.868140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.868150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 
00:27:08.355 [2024-07-12 17:35:26.868298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.868308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.868398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.868408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.868550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.868560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.868700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.868709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.868798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.868807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 
00:27:08.355 [2024-07-12 17:35:26.868890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.868900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.869048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.869058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.869154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.869163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.869302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.869311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 00:27:08.355 [2024-07-12 17:35:26.869522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.869532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it. 
00:27:08.355 [2024-07-12 17:35:26.869708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.355 [2024-07-12 17:35:26.869717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.355 qpair failed and we were unable to recover it.
[the same connect() failure and unrecoverable qpair error for tqpair=0x7f4a7c000b90 repeats with advancing timestamps]
00:27:08.357 [2024-07-12 17:35:26.883493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.357 [2024-07-12 17:35:26.883509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.357 qpair failed and we were unable to recover it.
[the same failure then repeats with advancing timestamps for tqpair=0x7f4a74000b90]
00:27:08.358 [2024-07-12 17:35:26.887593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.887607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.887708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.887720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.887944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.887958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.888072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.888085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.888239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.888252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 
00:27:08.358 [2024-07-12 17:35:26.888517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.888531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.888641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.888654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.888756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.888768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.889042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.889056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.889163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.889175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 
00:27:08.358 [2024-07-12 17:35:26.889319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.889332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.889491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.889504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.889772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.889786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.889908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.889922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.890058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.890071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 
00:27:08.358 [2024-07-12 17:35:26.890154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.890166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.890333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.890346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.890526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.890541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.890643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.890656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.358 [2024-07-12 17:35:26.890756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.890769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 
00:27:08.358 [2024-07-12 17:35:26.890931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.358 [2024-07-12 17:35:26.890947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.358 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.891099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.891133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.891434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.891464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.891653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.891682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.891903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.891933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 
00:27:08.359 [2024-07-12 17:35:26.892217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.892246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.892399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.892430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.892616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.892644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.892764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.892793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.893062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.893076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 
00:27:08.359 [2024-07-12 17:35:26.893258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.893271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.893460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.893474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.893609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.893622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.893767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.893780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.894008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.894021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 
00:27:08.359 [2024-07-12 17:35:26.894195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.894208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.894309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.894320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.894410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.894419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.894547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.894557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.894676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.894686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 
00:27:08.359 [2024-07-12 17:35:26.894785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.894794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.894941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.894951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.895158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.895168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.895313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.895342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.895546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.895577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 
00:27:08.359 [2024-07-12 17:35:26.895715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.895744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.895929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.895939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.896124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.896158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.896354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.896369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.896559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.896572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 
00:27:08.359 [2024-07-12 17:35:26.896723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.896736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.896928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.896941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.897043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.897055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.897157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.897171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.897389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.897403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 
00:27:08.359 [2024-07-12 17:35:26.897620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.897634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.897749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.897762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.897855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.897868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.897977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.897990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.898151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.898165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 
00:27:08.359 [2024-07-12 17:35:26.898272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.898287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.359 qpair failed and we were unable to recover it. 00:27:08.359 [2024-07-12 17:35:26.898463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.359 [2024-07-12 17:35:26.898478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-07-12 17:35:26.898665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.898679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-07-12 17:35:26.898783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.898795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-07-12 17:35:26.898986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.899000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 
00:27:08.360 [2024-07-12 17:35:26.899234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.899247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-07-12 17:35:26.899501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.899515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-07-12 17:35:26.899601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.899613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-07-12 17:35:26.899875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.899887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-07-12 17:35:26.900075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.900085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 
00:27:08.360 [2024-07-12 17:35:26.900174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.900183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-07-12 17:35:26.900332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.900341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-07-12 17:35:26.900510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.900520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-07-12 17:35:26.900622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.900632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 00:27:08.360 [2024-07-12 17:35:26.900730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.900740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it. 
00:27:08.360 [2024-07-12 17:35:26.900899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.360 [2024-07-12 17:35:26.900909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.360 qpair failed and we were unable to recover it.
[... the preceding pair of messages (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error ... qpair failed and we were unable to recover it) repeats for every reconnect attempt from 17:35:26.901141 through 17:35:26.918433, alternating between tqpair=0x7f4a7c000b90 and tqpair=0x24deed0, all with addr=10.0.0.2, port=4420 ...]
00:27:08.363 [2024-07-12 17:35:26.918664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.918674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.918857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.918867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.919101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.919111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.919216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.919225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.919397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.919407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 
00:27:08.363 [2024-07-12 17:35:26.919619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.919628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.919799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.919808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.919984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.919993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.920075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.920084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.920220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.920230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 
00:27:08.363 [2024-07-12 17:35:26.920329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.920338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.920481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.920491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.920578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.920588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.920729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.920739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.920824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.920833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 
00:27:08.363 [2024-07-12 17:35:26.920920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.920930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.921099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.921108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.921253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.921262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.921439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.921449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.921551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.921561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 
00:27:08.363 [2024-07-12 17:35:26.921644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.921654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.921739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.921749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.921988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.921997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.922224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.922233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.922466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.922476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 
00:27:08.363 [2024-07-12 17:35:26.922629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.922639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.922778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.363 [2024-07-12 17:35:26.922788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.363 qpair failed and we were unable to recover it. 00:27:08.363 [2024-07-12 17:35:26.922874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.922883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.923031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.923041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.923132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.923142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 
00:27:08.364 [2024-07-12 17:35:26.923208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.923217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.923427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.923438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.923526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.923536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.923622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.923632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.923731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.923741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 
00:27:08.364 [2024-07-12 17:35:26.923897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.923906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.924057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.924067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.924302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.924312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.924488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.924498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.924651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.924661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 
00:27:08.364 [2024-07-12 17:35:26.924820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.924849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.925121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.925151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.925402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.925432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.925630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.925639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.925780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.925790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 
00:27:08.364 [2024-07-12 17:35:26.925883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.925892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.925985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.925994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.926162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.926172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.926326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.926336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.926487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.926498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 
00:27:08.364 [2024-07-12 17:35:26.926649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.926659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.926744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.926753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.926830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.926839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.926926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.926935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.927081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.927090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 
00:27:08.364 [2024-07-12 17:35:26.927175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.927184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.927261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.927270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.927339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.927348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.927458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.927473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.927644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.927658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 
00:27:08.364 [2024-07-12 17:35:26.927755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.927767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.927868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.927884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.927975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.927987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.928098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.928110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.928256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.928267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 
00:27:08.364 [2024-07-12 17:35:26.928412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.928423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.928628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.928638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.364 qpair failed and we were unable to recover it. 00:27:08.364 [2024-07-12 17:35:26.928779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.364 [2024-07-12 17:35:26.928789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.365 qpair failed and we were unable to recover it. 00:27:08.365 [2024-07-12 17:35:26.928932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.365 [2024-07-12 17:35:26.928942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.365 qpair failed and we were unable to recover it. 00:27:08.365 [2024-07-12 17:35:26.929104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.365 [2024-07-12 17:35:26.929114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.365 qpair failed and we were unable to recover it. 
00:27:08.365 [2024-07-12 17:35:26.929190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.365 [2024-07-12 17:35:26.929199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.365 qpair failed and we were unable to recover it. 00:27:08.365 [2024-07-12 17:35:26.929354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.365 [2024-07-12 17:35:26.929365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.365 qpair failed and we were unable to recover it. 00:27:08.365 [2024-07-12 17:35:26.929457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.365 [2024-07-12 17:35:26.929466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.365 qpair failed and we were unable to recover it. 00:27:08.365 [2024-07-12 17:35:26.929727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.365 [2024-07-12 17:35:26.929756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.365 qpair failed and we were unable to recover it. 00:27:08.365 [2024-07-12 17:35:26.929902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.365 [2024-07-12 17:35:26.929932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.365 qpair failed and we were unable to recover it. 
00:27:08.365 [2024-07-12 17:35:26.930207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.930236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.930388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.930419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.930615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.930644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.930766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.930795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.930990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.931019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.931266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.931296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.931545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.931575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.931822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.931832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.931918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.931927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.932079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.932089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.932264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.932274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.932422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.932432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.932643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.932653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.932829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.932839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.932993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.933002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.933303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.933332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.933544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.933574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.933846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.933875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.934053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.934063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.934157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.934166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.934373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.934387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.934471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.934480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.934608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.934617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.934817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.934851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.935055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.935071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.935172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.935186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.365 [2024-07-12 17:35:26.935348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.365 [2024-07-12 17:35:26.935362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.365 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.935561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.935576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.935726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.935758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.935989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.936019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.936206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.936236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.936421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.936452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.936623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.936653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.936903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.936939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.937044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.937057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.937208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.937222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.937372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.937390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.937503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.937517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.937683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.937694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.937854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.937885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.938077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.938106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.938324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.938353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.938673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.938703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.938900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.938930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.939163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.939173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.939445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.939455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.939689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.939719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.939972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.940002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.940187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.940217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.940424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.940454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.940652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.940662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.940819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.940828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.940933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.940942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.941035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.941044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.941151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.941161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.941231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.941240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.941350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.941359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.941530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.941541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.941626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.941635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.941773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.941783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.941866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.941875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.941968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.941977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.942070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.942080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.942150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.942161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.942319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.942328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.942472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.942482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.942649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.942659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.366 qpair failed and we were unable to recover it.
00:27:08.366 [2024-07-12 17:35:26.942867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.366 [2024-07-12 17:35:26.942877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.942964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.942973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.943049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.943058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.943231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.943241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.943310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.943319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.943400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.943410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.943517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.943526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.943606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.943614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.943831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.943841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.943989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.943999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.944085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.944094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.944181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.944190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.944338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.944347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.944491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.944501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.944659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.944669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.944773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.944783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.944876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.944885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.945038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.945048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.945134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.945144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.945241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.945251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.945396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.945407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.945506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.945516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.945676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.945685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.945836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.945846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.945943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.945952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.946048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.946058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.946224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.946234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.946305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.946314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.946546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.946556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.946644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.946653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.946734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.946743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.946962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.946972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.947178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.947187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.947260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.947269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.947416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.947426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.947586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.947596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.947676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.947687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.947759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.947768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.947842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.947850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.948060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.367 [2024-07-12 17:35:26.948069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.367 qpair failed and we were unable to recover it.
00:27:08.367 [2024-07-12 17:35:26.948306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.368 [2024-07-12 17:35:26.948315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.368 qpair failed and we were unable to recover it.
00:27:08.368 [2024-07-12 17:35:26.948458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.368 [2024-07-12 17:35:26.948468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.368 qpair failed and we were unable to recover it.
00:27:08.368 [2024-07-12 17:35:26.948555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.368 [2024-07-12 17:35:26.948565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.368 qpair failed and we were unable to recover it.
00:27:08.368 [2024-07-12 17:35:26.948631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.368 [2024-07-12 17:35:26.948640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.368 qpair failed and we were unable to recover it.
00:27:08.368 [2024-07-12 17:35:26.948735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.368 [2024-07-12 17:35:26.948745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.368 qpair failed and we were unable to recover it.
00:27:08.368 [2024-07-12 17:35:26.948895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.368 [2024-07-12 17:35:26.948905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.368 qpair failed and we were unable to recover it.
00:27:08.368 [2024-07-12 17:35:26.949059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.368 [2024-07-12 17:35:26.949069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.368 qpair failed and we were unable to recover it.
00:27:08.368 [2024-07-12 17:35:26.949160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.368 [2024-07-12 17:35:26.949170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.368 qpair failed and we were unable to recover it.
00:27:08.368 [2024-07-12 17:35:26.949316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.949326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.949472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.949482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.949716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.949726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.949877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.949887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.950039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.950049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 
00:27:08.368 [2024-07-12 17:35:26.950137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.950146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.950313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.950322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.950466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.950477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.950563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.950575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.950669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.950679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 
00:27:08.368 [2024-07-12 17:35:26.950836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.950846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.951083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.951093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.951165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.951174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.951313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.951322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.951481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.951491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 
00:27:08.368 [2024-07-12 17:35:26.951587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.951596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.951689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.951699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.951782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.951791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.951886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.951895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.951970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.951979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 
00:27:08.368 [2024-07-12 17:35:26.952077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.952087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.952183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.952194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.952279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.952288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.952362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.952371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.952542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.952551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 
00:27:08.368 [2024-07-12 17:35:26.952786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.952796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.952938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.952948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.953054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.953064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.953229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.953241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.953326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.953335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 
00:27:08.368 [2024-07-12 17:35:26.953519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.368 [2024-07-12 17:35:26.953529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.368 qpair failed and we were unable to recover it. 00:27:08.368 [2024-07-12 17:35:26.953675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.953685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.953830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.953839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.953925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.953935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.954074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.954084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 
00:27:08.369 [2024-07-12 17:35:26.954233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.954243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.954331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.954340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.954412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.954421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.954518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.954530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.954624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.954634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 
00:27:08.369 [2024-07-12 17:35:26.954724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.954734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.954827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.954837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.955052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.955062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.955141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.955150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.955291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.955301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 
00:27:08.369 [2024-07-12 17:35:26.955452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.955462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.955566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.955576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.955784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.955794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.955867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.955876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.956035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.956045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 
00:27:08.369 [2024-07-12 17:35:26.956288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.956298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.956372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.956385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.956527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.956537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.956699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.956709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.956884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.956894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 
00:27:08.369 [2024-07-12 17:35:26.956983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.956993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.957068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.957077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.957167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.957176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.957314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.957324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.957432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.957442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 
00:27:08.369 [2024-07-12 17:35:26.957594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.957604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.957687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.957697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.957844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.957854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.958035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.958045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 00:27:08.369 [2024-07-12 17:35:26.958141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.369 [2024-07-12 17:35:26.958150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.369 qpair failed and we were unable to recover it. 
00:27:08.369 [2024-07-12 17:35:26.958240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.370 [2024-07-12 17:35:26.958250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.370 qpair failed and we were unable to recover it. 00:27:08.370 [2024-07-12 17:35:26.958334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.370 [2024-07-12 17:35:26.958344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.370 qpair failed and we were unable to recover it. 00:27:08.370 [2024-07-12 17:35:26.958428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.370 [2024-07-12 17:35:26.958438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.370 qpair failed and we were unable to recover it. 00:27:08.370 [2024-07-12 17:35:26.958576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.370 [2024-07-12 17:35:26.958587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.370 qpair failed and we were unable to recover it. 00:27:08.370 [2024-07-12 17:35:26.958689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.370 [2024-07-12 17:35:26.958699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.370 qpair failed and we were unable to recover it. 
00:27:08.370 [2024-07-12 17:35:26.958774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.370 [2024-07-12 17:35:26.958783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.370 qpair failed and we were unable to recover it. 00:27:08.370 [2024-07-12 17:35:26.958886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.370 [2024-07-12 17:35:26.958895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.370 qpair failed and we were unable to recover it. 00:27:08.370 [2024-07-12 17:35:26.958968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.370 [2024-07-12 17:35:26.958978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.370 qpair failed and we were unable to recover it. 00:27:08.370 [2024-07-12 17:35:26.959050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.370 [2024-07-12 17:35:26.959061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.370 qpair failed and we were unable to recover it. 00:27:08.370 [2024-07-12 17:35:26.959270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.370 [2024-07-12 17:35:26.959280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.370 qpair failed and we were unable to recover it. 
00:27:08.370 [2024-07-12 17:35:26.959448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.370 [2024-07-12 17:35:26.959458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.370 qpair failed and we were unable to recover it.
00:27:08.370-00:27:08.373 [... the same connect()/nvme_tcp_qpair_connect_sock error pair repeats ~114 more times between 17:35:26.959 and 17:35:26.982, every attempt failing with errno = 111 against addr=10.0.0.2, port=4420, for tqpairs 0x7f4a7c000b90, 0x7f4a74000b90, 0x7f4a84000b90, and 0x24deed0; each repetition ends with "qpair failed and we were unable to recover it." ...]
00:27:08.373 [2024-07-12 17:35:26.982157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.982166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.982310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.982320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.982386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.982396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.982575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.982584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.982673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.982681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 
00:27:08.373 [2024-07-12 17:35:26.982783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.982792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.982885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.982894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.983057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.983067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.983151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.983160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.983245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.983254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 
00:27:08.373 [2024-07-12 17:35:26.983346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.983354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.983494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.983504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.983590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.983599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.983691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.983700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.983841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.983851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 
00:27:08.373 [2024-07-12 17:35:26.984036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.984065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.984272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.984301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.984537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.984567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.984770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.984799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.985049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.985078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 
00:27:08.373 [2024-07-12 17:35:26.985287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.985317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.985533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.985563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.985833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.985863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.986035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.986045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.986225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.986235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 
00:27:08.373 [2024-07-12 17:35:26.986452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.986488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.986619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.986648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.986844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.986873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.987065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.987093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.987277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.987306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 
00:27:08.373 [2024-07-12 17:35:26.987509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.987540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.987737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.987747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.987883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.987893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.988075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.988084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.988279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.988308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 
00:27:08.373 [2024-07-12 17:35:26.988435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.988465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.373 [2024-07-12 17:35:26.988718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.373 [2024-07-12 17:35:26.988747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.373 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.989014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.989043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.989299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.989327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.989489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.989520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 
00:27:08.374 [2024-07-12 17:35:26.989793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.989822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.990000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.990010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.990107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.990117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.990323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.990333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.990477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.990488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 
00:27:08.374 [2024-07-12 17:35:26.990648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.990658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.990741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.990750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.990832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.990841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.991005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.991014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.991102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.991111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 
00:27:08.374 [2024-07-12 17:35:26.991181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.991190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.991329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.991338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.991506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.991516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.991591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.991600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.991684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.991693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 
00:27:08.374 [2024-07-12 17:35:26.991785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.991794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.991943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.991952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.992039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.992048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.992142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.992151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.992313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.992323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 
00:27:08.374 [2024-07-12 17:35:26.992532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.992542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.992616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.992625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.992763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.992773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.992927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.992937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.993042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.993052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 
00:27:08.374 [2024-07-12 17:35:26.993289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.993301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.993458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.993468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.993564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.993573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.993675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.993684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.374 [2024-07-12 17:35:26.993778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.993786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 
00:27:08.374 [2024-07-12 17:35:26.993880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.374 [2024-07-12 17:35:26.993889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.374 qpair failed and we were unable to recover it. 00:27:08.375 [2024-07-12 17:35:26.993988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.375 [2024-07-12 17:35:26.993997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.375 qpair failed and we were unable to recover it. 00:27:08.375 [2024-07-12 17:35:26.994083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.375 [2024-07-12 17:35:26.994091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.375 qpair failed and we were unable to recover it. 00:27:08.375 [2024-07-12 17:35:26.994183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.375 [2024-07-12 17:35:26.994192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.375 qpair failed and we were unable to recover it. 00:27:08.375 [2024-07-12 17:35:26.994328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.375 [2024-07-12 17:35:26.994337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.375 qpair failed and we were unable to recover it. 
00:27:08.375 [2024-07-12 17:35:26.994499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.375 [2024-07-12 17:35:26.994510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.375 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failed error repeated for tqpair=0x7f4a7c000b90 (addr=10.0.0.2, port=4420) through 2024-07-12 17:35:27.014603 ...]
00:27:08.377 [2024-07-12 17:35:27.014756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.377 [2024-07-12 17:35:27.014766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.377 qpair failed and we were unable to recover it. 00:27:08.377 [2024-07-12 17:35:27.014925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.377 [2024-07-12 17:35:27.014935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.377 qpair failed and we were unable to recover it. 00:27:08.377 [2024-07-12 17:35:27.015019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.377 [2024-07-12 17:35:27.015028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.377 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.015182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.015192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.015414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.015445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 
00:27:08.378 [2024-07-12 17:35:27.015632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.015661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.015796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.015826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.015941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.015951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.016093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.016103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.016259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.016269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 
00:27:08.378 [2024-07-12 17:35:27.016408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.016419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.016585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.016595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.016665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.016674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.016833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.016843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.016921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.016930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 
00:27:08.378 [2024-07-12 17:35:27.017135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.017144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.017232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.017241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.017398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.017409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.017548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.017558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.017642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.017651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 
00:27:08.378 [2024-07-12 17:35:27.017737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.017746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.017833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.017842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.017931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.017941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.018029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.018041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.018197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.018228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 
00:27:08.378 [2024-07-12 17:35:27.018365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.018404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.018609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.018638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.018854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.018883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.019004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.019034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.019157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.019185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 
00:27:08.378 [2024-07-12 17:35:27.019397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.019427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.019562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.019592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.019861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.019890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.020071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.020081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.020327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.020356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 
00:27:08.378 [2024-07-12 17:35:27.020515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.020549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.020839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.378 [2024-07-12 17:35:27.020869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.378 qpair failed and we were unable to recover it. 00:27:08.378 [2024-07-12 17:35:27.021067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.021096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.021346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.021375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.021523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.021553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 
00:27:08.379 [2024-07-12 17:35:27.021757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.021786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.021989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.021999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.022174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.022203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.022405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.022435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.022654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.022684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 
00:27:08.379 [2024-07-12 17:35:27.022865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.022894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.023100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.023129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.023325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.023354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.023550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.023580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.023787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.023816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 
00:27:08.379 [2024-07-12 17:35:27.024016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.024025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.024169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.024179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.024419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.024450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.024636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.024666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.024799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.024808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 
00:27:08.379 [2024-07-12 17:35:27.024968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.024978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.025075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.025084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.025291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.025300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.025458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.025469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.025634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.025644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 
00:27:08.379 [2024-07-12 17:35:27.025815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.025825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.025904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.025913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.025998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.026007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.026224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.026235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.026419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.026430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 
00:27:08.379 [2024-07-12 17:35:27.026642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.026652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.026805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.026815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.027063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.027092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.027292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.027322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.027608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.027639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 
00:27:08.379 [2024-07-12 17:35:27.027896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.027906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.028016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.028026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.028116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.028125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.028252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.028262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 00:27:08.379 [2024-07-12 17:35:27.028406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.379 [2024-07-12 17:35:27.028416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.379 qpair failed and we were unable to recover it. 
00:27:08.379 [2024-07-12 17:35:27.028567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.379 [2024-07-12 17:35:27.028579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.379 qpair failed and we were unable to recover it.
00:27:08.379 [2024-07-12 17:35:27.028656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.379 [2024-07-12 17:35:27.028665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.379 qpair failed and we were unable to recover it.
00:27:08.379 [2024-07-12 17:35:27.028834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.379 [2024-07-12 17:35:27.028844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.379 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.028981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.028991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.029219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.029228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.029324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.029334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.029405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.029415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.029626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.029636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.029776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.029786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.029993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.030003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.030097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.030107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.030253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.030263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.030367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.030380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.030590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.030600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.030754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.030763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.030974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.031002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.031184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.031214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.031401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.031430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.031652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.031681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.031809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.031819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.032084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.032093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.032332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.032342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.032548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.032558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.032671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.032681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.032758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.032767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.032853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.032862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.032967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.032976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.033110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.033120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.033271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.033281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.033459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.033495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.033678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.033708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.033859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.033889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.034033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.034043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.034193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.034203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.034295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.034307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.034528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.034538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.034684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.034694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.034897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.034907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.035115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.035125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.035309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.035319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.035500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.035512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.035670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.035680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.035858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.380 [2024-07-12 17:35:27.035867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.380 qpair failed and we were unable to recover it.
00:27:08.380 [2024-07-12 17:35:27.035970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.035980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.036156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.036165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.036343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.036353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.036460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.036470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.036579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.036589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.036666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.036675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.036839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.036849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.037007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.037017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.037100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.037109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.037199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.037208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.037301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.037310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.037462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.037472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.037551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.037560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.037792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.037802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.037877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.037886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.038027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.038037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.038126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.038135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.038211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.038221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.038294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.038303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.038382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.038392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.038543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.038553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.038709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.038719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.038824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.038834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.038987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.038997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.039130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.039163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.039294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.039327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.039443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.039464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.039626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.039637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.039849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.039859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.040093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.040103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.040197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.040206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.040417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.040427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.040507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.040516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.040798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.040808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.041017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.041026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.041174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.041184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.041338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.041348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.041427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.041438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.041618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.041628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.041699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.041708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.381 qpair failed and we were unable to recover it.
00:27:08.381 [2024-07-12 17:35:27.041811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.381 [2024-07-12 17:35:27.041820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.041924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.041934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.042037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.042047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.042258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.042268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.042363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.042372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.042595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.042605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.042707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.042717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.042804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.042813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.042962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.042972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.043115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.043125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.043266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.043275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.043354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.043363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.043478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.043488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.043577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.043586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.043748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.043758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.043851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.043860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.044017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.044027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.044175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.044185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.044339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.044349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.044518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.044528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.044620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.044629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.044705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.044714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.044866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.044876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.045041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.045050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.045217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.045234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.045328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.045340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.045500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.045514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.045678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.045693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.045955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.045969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.046131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.046145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.046246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.046259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.046444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.382 [2024-07-12 17:35:27.046459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.382 qpair failed and we were unable to recover it.
00:27:08.382 [2024-07-12 17:35:27.046614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.382 [2024-07-12 17:35:27.046626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.382 qpair failed and we were unable to recover it. 00:27:08.382 [2024-07-12 17:35:27.046725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.382 [2024-07-12 17:35:27.046738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.382 qpair failed and we were unable to recover it. 00:27:08.382 [2024-07-12 17:35:27.046898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.382 [2024-07-12 17:35:27.046912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.382 qpair failed and we were unable to recover it. 00:27:08.382 [2024-07-12 17:35:27.047143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.382 [2024-07-12 17:35:27.047173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.382 qpair failed and we were unable to recover it. 00:27:08.382 [2024-07-12 17:35:27.047422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.382 [2024-07-12 17:35:27.047452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 
00:27:08.383 [2024-07-12 17:35:27.047664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.047703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.047834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.047863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.048009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.048039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.048165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.048178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.048341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.048354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 
00:27:08.383 [2024-07-12 17:35:27.048599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.048614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.048709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.048723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.048877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.048890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.048995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.049009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.049109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.049122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 
00:27:08.383 [2024-07-12 17:35:27.049202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.049214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.049326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.049339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.049428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.049441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.049628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.049642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.049727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.049741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 
00:27:08.383 [2024-07-12 17:35:27.049898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.049911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.050082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.050094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.050190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.050203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.050369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.050383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.050465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.050474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 
00:27:08.383 [2024-07-12 17:35:27.050571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.050582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.050758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.050768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.050929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.050939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.051164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.051194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.051311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.051340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 
00:27:08.383 [2024-07-12 17:35:27.051485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.051523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.051787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.051817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.052032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.052070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.052330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.383 [2024-07-12 17:35:27.052344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.383 qpair failed and we were unable to recover it. 00:27:08.383 [2024-07-12 17:35:27.052442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.052456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 
00:27:08.384 [2024-07-12 17:35:27.052577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.052590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.052696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.052710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.052779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.052792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.052905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.052918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.053096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.053109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 
00:27:08.384 [2024-07-12 17:35:27.053272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.053285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.053506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.053522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.053696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.053707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.053793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.053803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.053962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.053971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 
00:27:08.384 [2024-07-12 17:35:27.054050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.054061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.054139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.054148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.054292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.054302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.054466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.054476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.054638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.054648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 
00:27:08.384 [2024-07-12 17:35:27.054854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.054863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.054946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.054955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.055122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.055132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.055269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.055279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.055447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.055457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 
00:27:08.384 [2024-07-12 17:35:27.055618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.055628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.055709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.055719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.055868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.055877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.055968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.055978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.056128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.056137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 
00:27:08.384 [2024-07-12 17:35:27.056293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.056302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.056413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.056423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.056620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.056649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.056857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.056885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.057082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.057111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 
00:27:08.384 [2024-07-12 17:35:27.057317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.384 [2024-07-12 17:35:27.057327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.384 qpair failed and we were unable to recover it. 00:27:08.384 [2024-07-12 17:35:27.057480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.057490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 00:27:08.385 [2024-07-12 17:35:27.057625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.057634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 00:27:08.385 [2024-07-12 17:35:27.057789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.057799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 00:27:08.385 [2024-07-12 17:35:27.057963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.057973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 
00:27:08.385 [2024-07-12 17:35:27.058047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.058056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 00:27:08.385 [2024-07-12 17:35:27.058230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.058239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 00:27:08.385 [2024-07-12 17:35:27.058353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.058368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 00:27:08.385 [2024-07-12 17:35:27.058539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.058553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 00:27:08.385 [2024-07-12 17:35:27.058660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.058673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 
00:27:08.385 [2024-07-12 17:35:27.058823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.058837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 00:27:08.385 [2024-07-12 17:35:27.059057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.059070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 00:27:08.385 [2024-07-12 17:35:27.059165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.059178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 00:27:08.385 [2024-07-12 17:35:27.059329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.059343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 00:27:08.385 [2024-07-12 17:35:27.059439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.385 [2024-07-12 17:35:27.059453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.385 qpair failed and we were unable to recover it. 
[... the same error triple (posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim through 2024-07-12 17:35:27.079645 ...]
00:27:08.389 [2024-07-12 17:35:27.079805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.079817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.080056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.080068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.080231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.080247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.080410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.080423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.080520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.080531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 
00:27:08.389 [2024-07-12 17:35:27.080688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.080700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.080809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.080822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.080895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.080907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.081142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.081156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.081292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.081304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 
00:27:08.389 [2024-07-12 17:35:27.081479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.081491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.081649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.081662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.081768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.081780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.081870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.081882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.082098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.082111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 
00:27:08.389 [2024-07-12 17:35:27.082272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.082284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.082437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.082450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.082555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.082567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.082734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.082746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.082894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.082907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 
00:27:08.389 [2024-07-12 17:35:27.082998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.083011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.083130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.083142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.083242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.083254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.083344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.083356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.083531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.083543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 
00:27:08.389 [2024-07-12 17:35:27.083697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.083710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.083862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.389 [2024-07-12 17:35:27.083875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.389 qpair failed and we were unable to recover it. 00:27:08.389 [2024-07-12 17:35:27.084043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.084055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.084225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.084238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.084460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.084474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 
00:27:08.390 [2024-07-12 17:35:27.084653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.084665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.084732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.084744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.084961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.084974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.085138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.085150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.085301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.085313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 
00:27:08.390 [2024-07-12 17:35:27.085405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.085418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.085585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.085598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.085703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.085717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.085824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.085836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.085991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.086005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 
00:27:08.390 [2024-07-12 17:35:27.086106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.086118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.086268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.086282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.086364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.086383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.086549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.086566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.086647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.086659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 
00:27:08.390 [2024-07-12 17:35:27.086824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.086838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.087065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.087079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.087184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.087196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.087358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.087372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.087475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.087488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 
00:27:08.390 [2024-07-12 17:35:27.087652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.087665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.087850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.087864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.088036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.088049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.088156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.088169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.088269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.088284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 
00:27:08.390 [2024-07-12 17:35:27.088438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.088453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.088626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.088640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.088734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.088747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.088929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.088943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 00:27:08.390 [2024-07-12 17:35:27.089127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.390 [2024-07-12 17:35:27.089140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.390 qpair failed and we were unable to recover it. 
00:27:08.390 [2024-07-12 17:35:27.089213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.391 [2024-07-12 17:35:27.089226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.391 qpair failed and we were unable to recover it. 00:27:08.391 [2024-07-12 17:35:27.089318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.391 [2024-07-12 17:35:27.089331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.391 qpair failed and we were unable to recover it. 00:27:08.391 [2024-07-12 17:35:27.089443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.391 [2024-07-12 17:35:27.089457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.391 qpair failed and we were unable to recover it. 00:27:08.391 [2024-07-12 17:35:27.089543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.391 [2024-07-12 17:35:27.089556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.391 qpair failed and we were unable to recover it. 00:27:08.391 [2024-07-12 17:35:27.089619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.391 [2024-07-12 17:35:27.089632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.391 qpair failed and we were unable to recover it. 
00:27:08.391 [2024-07-12 17:35:27.089781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.391 [2024-07-12 17:35:27.089794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.391 qpair failed and we were unable to recover it. 00:27:08.391 [2024-07-12 17:35:27.089906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.391 [2024-07-12 17:35:27.089919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.391 qpair failed and we were unable to recover it. 00:27:08.391 [2024-07-12 17:35:27.090023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.391 [2024-07-12 17:35:27.090038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.391 qpair failed and we were unable to recover it. 00:27:08.391 [2024-07-12 17:35:27.090131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.391 [2024-07-12 17:35:27.090144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.391 qpair failed and we were unable to recover it. 00:27:08.391 [2024-07-12 17:35:27.090251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.391 [2024-07-12 17:35:27.090265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.391 qpair failed and we were unable to recover it. 
00:27:08.391 [2024-07-12 17:35:27.090361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.391 [2024-07-12 17:35:27.090374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.391 qpair failed and we were unable to recover it. 00:27:08.391 [2024-07-12 17:35:27.090552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.391 [2024-07-12 17:35:27.090566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.391 qpair failed and we were unable to recover it. 00:27:08.391 [2024-07-12 17:35:27.090726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.391 [2024-07-12 17:35:27.090740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.391 qpair failed and we were unable to recover it. 00:27:08.666 [2024-07-12 17:35:27.090908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-07-12 17:35:27.090922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-07-12 17:35:27.091151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-07-12 17:35:27.091166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 
00:27:08.666 [2024-07-12 17:35:27.091251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-07-12 17:35:27.091265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-07-12 17:35:27.091350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-07-12 17:35:27.091363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-07-12 17:35:27.091472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-07-12 17:35:27.091485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-07-12 17:35:27.091671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-07-12 17:35:27.091685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-07-12 17:35:27.091790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-07-12 17:35:27.091804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 
00:27:08.666 [2024-07-12 17:35:27.091886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-07-12 17:35:27.091898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-07-12 17:35:27.092096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-07-12 17:35:27.092110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-07-12 17:35:27.092209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-07-12 17:35:27.092225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-07-12 17:35:27.092339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-07-12 17:35:27.092353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 00:27:08.666 [2024-07-12 17:35:27.092457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.666 [2024-07-12 17:35:27.092472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.666 qpair failed and we were unable to recover it. 
00:27:08.666 [2024-07-12 17:35:27.092631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.666 [2024-07-12 17:35:27.092644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.666 qpair failed and we were unable to recover it.
00:27:08.666 [2024-07-12 17:35:27.092807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.666 [2024-07-12 17:35:27.092820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.666 qpair failed and we were unable to recover it.
00:27:08.666 [2024-07-12 17:35:27.092932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.666 [2024-07-12 17:35:27.092946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.666 qpair failed and we were unable to recover it.
00:27:08.666 [2024-07-12 17:35:27.093042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.666 [2024-07-12 17:35:27.093055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.666 qpair failed and we were unable to recover it.
00:27:08.666 [2024-07-12 17:35:27.093164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.093177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.093370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.093388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.093484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.093498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.093657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.093686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.093903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.093931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.094063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.094092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.094238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.094252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.094335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.094349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.094446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.094459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.094542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.094554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.094633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.094649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.094748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.094761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.094911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.094924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.095088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.095101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.095196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.095210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.095313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.095326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.095490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.095504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.095601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.095614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.095819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.095833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.096028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.096041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.096206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.096235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.096374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.096411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.096611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.096639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.096845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.096873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.097124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.097154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.097303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.097317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.097544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.097575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.097773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.097802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.098056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.098087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.098386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.098400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.098487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.098500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.098596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.098609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.098843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.098857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.099055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.099071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.099239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.099252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.099428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.099442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.099597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.099610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.099720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.099733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.099946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.099960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.100178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.100192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.667 qpair failed and we were unable to recover it.
00:27:08.667 [2024-07-12 17:35:27.100340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.667 [2024-07-12 17:35:27.100353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.100576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.100590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.100700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.100714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.100817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.100830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.100916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.100928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.101017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.101029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.101249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.101263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.101361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.101374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.101541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.101555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.101652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.101665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.101777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.101791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.101942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.101956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.102109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.102122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.102288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.102302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.102468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.102482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.102581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.102593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.102668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.102681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.102777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.102789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.102882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.102894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.103003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.103016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.103095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.103107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.103204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.103217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.103390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.103404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.103497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.103509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.103607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.103619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.103719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.103731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.103884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.103897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.103996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.104009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.104155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.104169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.104274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.104286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.104384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.104397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.104614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.104630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.104780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.104793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.104960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.104976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.105119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.105149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.105358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.105394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.105624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.105653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.105875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.105905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.106121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.106150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.106342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.106355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.106520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.668 [2024-07-12 17:35:27.106562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.668 qpair failed and we were unable to recover it.
00:27:08.668 [2024-07-12 17:35:27.106811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.669 [2024-07-12 17:35:27.106845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.669 qpair failed and we were unable to recover it.
00:27:08.669 [2024-07-12 17:35:27.106974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.669 [2024-07-12 17:35:27.107003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.669 qpair failed and we were unable to recover it.
00:27:08.669 [2024-07-12 17:35:27.107196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.669 [2024-07-12 17:35:27.107210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.669 qpair failed and we were unable to recover it.
00:27:08.669 [2024-07-12 17:35:27.107385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.669 [2024-07-12 17:35:27.107428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.669 qpair failed and we were unable to recover it.
00:27:08.669 [2024-07-12 17:35:27.107631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.669 [2024-07-12 17:35:27.107660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.669 qpair failed and we were unable to recover it.
00:27:08.669 [2024-07-12 17:35:27.107845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.669 [2024-07-12 17:35:27.107885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.669 qpair failed and we were unable to recover it.
00:27:08.669 [2024-07-12 17:35:27.108171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.669 [2024-07-12 17:35:27.108203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.669 qpair failed and we were unable to recover it.
00:27:08.669 [2024-07-12 17:35:27.108437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.669 [2024-07-12 17:35:27.108469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.669 qpair failed and we were unable to recover it.
00:27:08.669 [2024-07-12 17:35:27.108714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.669 [2024-07-12 17:35:27.108727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.669 qpair failed and we were unable to recover it.
00:27:08.669 [2024-07-12 17:35:27.108890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.669 [2024-07-12 17:35:27.108905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.669 qpair failed and we were unable to recover it.
00:27:08.669 [2024-07-12 17:35:27.109193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.109207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.109304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.109320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.109421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.109434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.109533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.109547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.109631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.109644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 
00:27:08.669 [2024-07-12 17:35:27.109801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.109817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.109913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.109928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.110087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.110101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.110286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.110300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.110422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.110454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 
00:27:08.669 [2024-07-12 17:35:27.110587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.110601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.110701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.110714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.110822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.110835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.111054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.111068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.111179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.111192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 
00:27:08.669 [2024-07-12 17:35:27.111367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.111386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.111577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.111591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.111677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.111690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.111775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.111787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.111944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.111957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 
00:27:08.669 [2024-07-12 17:35:27.112063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.112076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.112229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.112243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.112408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.112430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.112603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.112639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.112840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.112869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 
00:27:08.669 [2024-07-12 17:35:27.113026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.113056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.113181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.113211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.113326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.113339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.113586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.113600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 00:27:08.669 [2024-07-12 17:35:27.113753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.113766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.669 qpair failed and we were unable to recover it. 
00:27:08.669 [2024-07-12 17:35:27.113867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.669 [2024-07-12 17:35:27.113880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.113963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.113977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.114128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.114141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.114292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.114306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.114407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.114420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 
00:27:08.670 [2024-07-12 17:35:27.114597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.114611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.114723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.114736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.114829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.114841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.115100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.115114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.115283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.115296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 
00:27:08.670 [2024-07-12 17:35:27.115468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.115482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.115702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.115715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.115870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.115884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.116204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.116217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.116435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.116449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 
00:27:08.670 [2024-07-12 17:35:27.116603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.116617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.116687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.116699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.116799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.116810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.116977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.116991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.117136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.117169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 
00:27:08.670 [2024-07-12 17:35:27.117426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.117453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.117618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.117629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.117778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.117788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.117866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.117885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.117965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.117974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 
00:27:08.670 [2024-07-12 17:35:27.118114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.118124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.118281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.118291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.118369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.118383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.118440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.118449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.118661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.118671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 
00:27:08.670 [2024-07-12 17:35:27.118757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.118766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.118858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.118867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.118947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.670 [2024-07-12 17:35:27.118962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.670 qpair failed and we were unable to recover it. 00:27:08.670 [2024-07-12 17:35:27.119071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.119080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.119178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.119188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 
00:27:08.671 [2024-07-12 17:35:27.119343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.119353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.119516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.119526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.119616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.119625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.119719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.119729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.119815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.119824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 
00:27:08.671 [2024-07-12 17:35:27.120032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.120041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.120129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.120138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.120279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.120289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.120440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.120450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.120604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.120614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 
00:27:08.671 [2024-07-12 17:35:27.120703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.120712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.120856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.120865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.121005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.121015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.121105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.121114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.121188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.121197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 
00:27:08.671 [2024-07-12 17:35:27.121293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.121303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.121443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.121453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.121562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.121571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.121654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.121663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.121864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.121874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 
00:27:08.671 [2024-07-12 17:35:27.121963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.121973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.122136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.122146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.122236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.122245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.122501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.122512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 00:27:08.671 [2024-07-12 17:35:27.122612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.671 [2024-07-12 17:35:27.122630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.671 qpair failed and we were unable to recover it. 
[... identical "posix_sock_create: *ERROR*: connect() failed, errno = 111" / "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error" / "qpair failed and we were unable to recover it." log triples repeat for tqpair=0x24deed0 and tqpair=0x7f4a7c000b90 (addr=10.0.0.2, port=4420) through [2024-07-12 17:35:27.138994] ...]
00:27:08.674 [2024-07-12 17:35:27.139094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.139104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.139253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.139262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.139350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.139360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.139498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.139508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.139583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.139592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 
00:27:08.674 [2024-07-12 17:35:27.139748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.139759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.139920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.139931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.140083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.140093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.140185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.140194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.140301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.140313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 
00:27:08.674 [2024-07-12 17:35:27.140413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.140424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.140636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.140646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.140739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.140749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.140891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.140902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.141108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.141119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 
00:27:08.674 [2024-07-12 17:35:27.141206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.141215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.141300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.141311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.141391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.141401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.141562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.141572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.141677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.141687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 
00:27:08.674 [2024-07-12 17:35:27.141845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.141854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.141925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.141935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.142083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.142092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.142171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.142181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 00:27:08.674 [2024-07-12 17:35:27.142267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.674 [2024-07-12 17:35:27.142276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.674 qpair failed and we were unable to recover it. 
00:27:08.675 [2024-07-12 17:35:27.142387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.142396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.142473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.142482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.142591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.142601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.142754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.142764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.142845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.142854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 
00:27:08.675 [2024-07-12 17:35:27.142962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.142972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.143029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.143039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.143109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.143118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.143267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.143278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.143360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.143369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 
00:27:08.675 [2024-07-12 17:35:27.143465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.143476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.143573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.143582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.143664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.143674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.143848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.143858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.143954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.143964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 
00:27:08.675 [2024-07-12 17:35:27.144043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.144052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.144209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.144219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.144306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.144316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.144455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.144464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.144656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.144665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 
00:27:08.675 [2024-07-12 17:35:27.144756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.144766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.144908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.144918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.145025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.145035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.145183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.145193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.145372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.145443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 
00:27:08.675 [2024-07-12 17:35:27.145570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.145599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.145741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.145769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.145978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.145988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.146069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.146079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.146171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.146180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 
00:27:08.675 [2024-07-12 17:35:27.146349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.146359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.146535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.146545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.146618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.146626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.146729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.146738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.675 qpair failed and we were unable to recover it. 00:27:08.675 [2024-07-12 17:35:27.146948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.675 [2024-07-12 17:35:27.146959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 
00:27:08.676 [2024-07-12 17:35:27.147052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.147061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.147138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.147148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.147238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.147248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.147621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.147637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.147793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.147804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 
00:27:08.676 [2024-07-12 17:35:27.147897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.147908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.147995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.148005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.148133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.148143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.148225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.148234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.148325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.148339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 
00:27:08.676 [2024-07-12 17:35:27.148419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.148431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.148597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.148608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.148765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.148775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.148925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.148936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.149038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.149048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 
00:27:08.676 [2024-07-12 17:35:27.149268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.149296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.149549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.149594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.149754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.149785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.150047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.150077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.150263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.150277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 
00:27:08.676 [2024-07-12 17:35:27.150517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.150531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.150633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.150646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.150750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.150764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.150878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.150891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.150997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.151011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 
00:27:08.676 [2024-07-12 17:35:27.151099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.151112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.151347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.151395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.151592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.151622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.151834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.151864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.152133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.152153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 
00:27:08.676 [2024-07-12 17:35:27.152246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.152259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.152359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.152371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.152497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.152507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.152613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.152624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.152711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.152720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 
00:27:08.676 [2024-07-12 17:35:27.152813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.152823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.152922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.152931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.153072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.153083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.153240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.153249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 00:27:08.676 [2024-07-12 17:35:27.153329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.676 [2024-07-12 17:35:27.153339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.676 qpair failed and we were unable to recover it. 
00:27:08.676 [2024-07-12 17:35:27.153505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.153515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.153620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.153631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.153711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.153720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.153877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.153887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.154022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.154031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 
00:27:08.677 [2024-07-12 17:35:27.154197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.154208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.154313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.154322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.154488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.154499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.154574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.154583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.154732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.154742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 
00:27:08.677 [2024-07-12 17:35:27.154835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.154844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.154935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.154945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.155032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.155041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.155122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.155133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.155297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.155307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 
00:27:08.677 [2024-07-12 17:35:27.155394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.155404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.155513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.155529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.155681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.155695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.155854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.155868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.156029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.156043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 
00:27:08.677 [2024-07-12 17:35:27.156148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.156161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.156313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.156326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.156431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.156445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.156528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.156542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.156639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.156652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 
00:27:08.677 [2024-07-12 17:35:27.156800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.156812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.156984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.156993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.157080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.157090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.157176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.157185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.157349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.157429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 
00:27:08.677 [2024-07-12 17:35:27.157570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.157599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.157725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.157755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.157958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.157987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.158115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.158124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.158206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.158216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 
00:27:08.677 [2024-07-12 17:35:27.158287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.158297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.158455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.158465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.158619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.158629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.158862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.158872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.677 [2024-07-12 17:35:27.159022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.159032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 
00:27:08.677 [2024-07-12 17:35:27.159190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.677 [2024-07-12 17:35:27.159199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.677 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.159332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.159362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.159517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.159547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.159736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.159764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.159946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.159975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 
00:27:08.678 [2024-07-12 17:35:27.160160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.160189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.160304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.160334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.160511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.160521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.160607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.160616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.160722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.160730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 
00:27:08.678 [2024-07-12 17:35:27.160806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.160814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.160965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.160974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.161050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.161058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.161199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.161209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.161359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.161369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 
00:27:08.678 [2024-07-12 17:35:27.161453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.161462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.161548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.161557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.161631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.161640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.161717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.161726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.161871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.161880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 
00:27:08.678 [2024-07-12 17:35:27.161956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.161965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.162123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.162134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.162273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.162283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.162385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.162396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.162468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.162477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 
00:27:08.678 [2024-07-12 17:35:27.162617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.162627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.162716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.162724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.162800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.162808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.162903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.162913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 00:27:08.678 [2024-07-12 17:35:27.162995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.678 [2024-07-12 17:35:27.163011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.678 qpair failed and we were unable to recover it. 
00:27:08.678 [2024-07-12 17:35:27.163102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.678 [2024-07-12 17:35:27.163111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.678 qpair failed and we were unable to recover it.
00:27:08.678 [2024-07-12 17:35:27.163188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.678 [2024-07-12 17:35:27.163198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.678 qpair failed and we were unable to recover it.
00:27:08.678 [2024-07-12 17:35:27.163292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.678 [2024-07-12 17:35:27.163301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.678 qpair failed and we were unable to recover it.
00:27:08.678 [2024-07-12 17:35:27.163372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.678 [2024-07-12 17:35:27.163385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.678 qpair failed and we were unable to recover it.
00:27:08.678 [2024-07-12 17:35:27.163482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.678 [2024-07-12 17:35:27.163491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.678 qpair failed and we were unable to recover it.
00:27:08.678 [2024-07-12 17:35:27.163635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.678 [2024-07-12 17:35:27.163644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.678 qpair failed and we were unable to recover it.
00:27:08.678 [2024-07-12 17:35:27.163725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.678 [2024-07-12 17:35:27.163734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.678 qpair failed and we were unable to recover it.
00:27:08.678 [2024-07-12 17:35:27.163813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.678 [2024-07-12 17:35:27.163822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.678 qpair failed and we were unable to recover it.
00:27:08.678 [2024-07-12 17:35:27.163967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.678 [2024-07-12 17:35:27.163977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.678 qpair failed and we were unable to recover it.
00:27:08.678 [2024-07-12 17:35:27.164070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.678 [2024-07-12 17:35:27.164079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.678 qpair failed and we were unable to recover it.
00:27:08.678 [2024-07-12 17:35:27.164167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.678 [2024-07-12 17:35:27.164177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.678 qpair failed and we were unable to recover it.
00:27:08.678 [2024-07-12 17:35:27.164252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.164260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.164329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.164338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.164420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.164430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.164583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.164591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.164733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.164743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.164821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.164830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.164917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.164926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.165013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.165022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.165231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.165241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.165314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.165322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.165409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.165419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.165507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.165517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.165682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.165691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.165770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.165780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.165921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.165931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.166009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.166018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.166125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.166134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.166212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.166221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.166367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.166381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.166484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.166493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.166571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.166580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.166665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.166674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.166749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.166759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.166899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.166909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.167046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.167056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.167119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.167127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.167225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.167234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.167304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.167314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.167396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.167406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.167551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.167560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.167632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.167641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.167786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.167796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.167963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.167973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.679 [2024-07-12 17:35:27.168049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.679 [2024-07-12 17:35:27.168058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.679 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.168135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.168144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.168218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.168227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.168368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.168380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.168452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.168460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.168603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.168614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.168692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.168701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.168842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.168851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.168929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.168939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.169036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.169045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.169121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.169129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.169271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.169282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.169393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.169404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.169477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.169487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.169548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.169556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.169695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.169705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.169777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.169786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.169842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.169851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.169924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.169932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.170178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.170187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.170280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.170289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.170432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.170442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.170547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.170563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.170733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.170747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.170961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.170974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.171135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.171148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.171243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.171256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.171403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.171416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.171588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.171602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.171821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.171834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.171933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.171945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.172083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.172096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.172246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.172258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.172348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.172362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.172442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.172452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.172645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.172656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.172803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.172812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.172976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.173014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.173142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.173169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.173359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.173397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.173538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.173566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.680 qpair failed and we were unable to recover it.
00:27:08.680 [2024-07-12 17:35:27.173827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.680 [2024-07-12 17:35:27.173857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.174063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.174092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.174282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.174311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.174579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.174609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.174796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.174826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.175083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.175112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.175298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.175333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.175495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.175505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.175578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.175587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.175662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.175671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.175832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.175842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.175967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.175995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.176177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.176205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.176432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.176463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.176717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.176746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.177000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.177029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.177164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.177193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.177357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.177367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.177549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.177580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.177765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.177793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.177911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.177940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.178184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.178243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.178354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.178369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.178538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.178553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.178653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.178665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.178827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.178840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.178920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.178932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.179050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.179064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.179153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.179166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.179282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.681 [2024-07-12 17:35:27.179295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420
00:27:08.681 qpair failed and we were unable to recover it.
00:27:08.681 [2024-07-12 17:35:27.179408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.681 [2024-07-12 17:35:27.179422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.681 qpair failed and we were unable to recover it. 00:27:08.681 [2024-07-12 17:35:27.179613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.681 [2024-07-12 17:35:27.179626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.681 qpair failed and we were unable to recover it. 00:27:08.681 [2024-07-12 17:35:27.179894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.681 [2024-07-12 17:35:27.179908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.681 qpair failed and we were unable to recover it. 00:27:08.681 [2024-07-12 17:35:27.180005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.681 [2024-07-12 17:35:27.180017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.681 qpair failed and we were unable to recover it. 00:27:08.681 [2024-07-12 17:35:27.180098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.681 [2024-07-12 17:35:27.180111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.681 qpair failed and we were unable to recover it. 
00:27:08.681 [2024-07-12 17:35:27.180198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.681 [2024-07-12 17:35:27.180211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.681 qpair failed and we were unable to recover it. 00:27:08.681 [2024-07-12 17:35:27.180291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.681 [2024-07-12 17:35:27.180302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.681 qpair failed and we were unable to recover it. 00:27:08.681 [2024-07-12 17:35:27.180463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.681 [2024-07-12 17:35:27.180472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.681 qpair failed and we were unable to recover it. 00:27:08.681 [2024-07-12 17:35:27.180551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.681 [2024-07-12 17:35:27.180560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.681 qpair failed and we were unable to recover it. 00:27:08.681 [2024-07-12 17:35:27.180631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.681 [2024-07-12 17:35:27.180641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.681 qpair failed and we were unable to recover it. 
00:27:08.681 [2024-07-12 17:35:27.180732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.681 [2024-07-12 17:35:27.180741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.681 qpair failed and we were unable to recover it. 00:27:08.681 [2024-07-12 17:35:27.180951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.180961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.181123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.181152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.181396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.181426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.181555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.181584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 
00:27:08.682 [2024-07-12 17:35:27.181704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.181733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.181859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.181889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.182078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.182107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.182250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.182282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.182430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.182444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 
00:27:08.682 [2024-07-12 17:35:27.182606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.182620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.182806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.182820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.183040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.183054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.183269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.183282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.183400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.183426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 
00:27:08.682 [2024-07-12 17:35:27.183525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.183539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.183629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.183643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.183737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.183750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.183902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.183916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.184077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.184091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 
00:27:08.682 [2024-07-12 17:35:27.184195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.184208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.184366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.184384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.184483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.184497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.184587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.184601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.184683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.184695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 
00:27:08.682 [2024-07-12 17:35:27.184796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.184808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.184920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.184932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.184996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.185005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.185159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.185169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.185250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.185259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 
00:27:08.682 [2024-07-12 17:35:27.185401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.185412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.185552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.185562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.185630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.185640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.185782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.185792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.185926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.185936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 
00:27:08.682 [2024-07-12 17:35:27.186023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.186038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.186132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.186145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.186294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.186307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.186399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.186413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.186575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.186588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 
00:27:08.682 [2024-07-12 17:35:27.186685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.186698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.186800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.186814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.682 qpair failed and we were unable to recover it. 00:27:08.682 [2024-07-12 17:35:27.186984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.682 [2024-07-12 17:35:27.186998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.187096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.187109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.187191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.187202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 
00:27:08.683 [2024-07-12 17:35:27.187296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.187304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.187392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.187401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.187540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.187550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.187708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.187719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.187901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.187911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 
00:27:08.683 [2024-07-12 17:35:27.187999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.188008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.188100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.188109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.188262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.188272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.188359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.188368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.188463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.188473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 
00:27:08.683 [2024-07-12 17:35:27.188569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.188577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.188720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.188730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.188820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.188830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.188985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.188995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.189070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.189078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 
00:27:08.683 [2024-07-12 17:35:27.189254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.189265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.189428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.189438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.189580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.189591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.189754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.189763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.189916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.189925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 
00:27:08.683 [2024-07-12 17:35:27.190088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.190098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.190160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.190168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.190242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.190252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.190346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.190355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 00:27:08.683 [2024-07-12 17:35:27.190561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.683 [2024-07-12 17:35:27.190572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.683 qpair failed and we were unable to recover it. 
00:27:08.686 [... entries from 2024-07-12 17:35:27.190677 through 17:35:27.210354 omitted: the same three-line sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats identically for every remaining entry in this span ...]
00:27:08.686 [2024-07-12 17:35:27.210568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.210597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-07-12 17:35:27.210797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.210825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-07-12 17:35:27.211049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.211078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-07-12 17:35:27.211361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.211371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-07-12 17:35:27.211541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.211551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 
00:27:08.686 [2024-07-12 17:35:27.211619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.211629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-07-12 17:35:27.211805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.211814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-07-12 17:35:27.211957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.211967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-07-12 17:35:27.212216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.212245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-07-12 17:35:27.212459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.212489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 
00:27:08.686 [2024-07-12 17:35:27.212708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.212737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-07-12 17:35:27.212987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.213016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-07-12 17:35:27.213151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.213180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-07-12 17:35:27.213391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.213401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-07-12 17:35:27.213606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.213616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 
00:27:08.686 [2024-07-12 17:35:27.213753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.213765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.686 [2024-07-12 17:35:27.214046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.686 [2024-07-12 17:35:27.214056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.686 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.214214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.214224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.214366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.214375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.214584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.214613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 
00:27:08.687 [2024-07-12 17:35:27.214806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.214836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.215022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.215051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.215294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.215304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.215462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.215472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.215566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.215575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 
00:27:08.687 [2024-07-12 17:35:27.215659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.215667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.215812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.215822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.215933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.215943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.216048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.216058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.216199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.216209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 
00:27:08.687 [2024-07-12 17:35:27.216362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.216372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.216449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.216459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.216617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.216627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.216768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.216777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.216926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.216935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 
00:27:08.687 [2024-07-12 17:35:27.217075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.217085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.217242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.217251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.217392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.217402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.217549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.217559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.217714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.217723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 
00:27:08.687 [2024-07-12 17:35:27.217894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.217904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.218055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.218065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.218205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.218215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.218306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.218315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.218454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.218464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 
00:27:08.687 [2024-07-12 17:35:27.218557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.218567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.218777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.218787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.218925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.218934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.219011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.219020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.219227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.219238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 
00:27:08.687 [2024-07-12 17:35:27.219392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.219402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.219544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.219554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.219622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.219631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.219881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.219890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.219972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.219981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 
00:27:08.687 [2024-07-12 17:35:27.220134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.220146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.220286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.220296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.687 [2024-07-12 17:35:27.220550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.687 [2024-07-12 17:35:27.220561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.687 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.220656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.220666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.220823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.220832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 
00:27:08.688 [2024-07-12 17:35:27.220980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.220990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.221102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.221111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.221255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.221265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.221427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.221438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.221582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.221610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 
00:27:08.688 [2024-07-12 17:35:27.221783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.221811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.222013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.222043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.222183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.222212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.222411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.222422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.222512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.222521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 
00:27:08.688 [2024-07-12 17:35:27.222673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.222684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.222823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.222832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.222930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.222941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.223093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.223103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.223180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.223188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 
00:27:08.688 [2024-07-12 17:35:27.223339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.223349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.223630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.223660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.223776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.223806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.223994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.224024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.224216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.224225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 
00:27:08.688 [2024-07-12 17:35:27.224470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.224513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.224691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.224720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.224869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.224905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.225087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.225118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.225316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.225344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 
00:27:08.688 [2024-07-12 17:35:27.225581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.225594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.225810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.225822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.225915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.225927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.226112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.226125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.226296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.226308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 
00:27:08.688 [2024-07-12 17:35:27.226452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.226466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.226627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.226639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.226795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.688 [2024-07-12 17:35:27.226807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.688 qpair failed and we were unable to recover it. 00:27:08.688 [2024-07-12 17:35:27.226969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.226981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.227089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.227100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 
00:27:08.689 [2024-07-12 17:35:27.227245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.227256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.227414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.227424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.227511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.227520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.227678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.227686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.227776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.227785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 
00:27:08.689 [2024-07-12 17:35:27.227928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.227936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.228012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.228021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.228179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.228188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.228263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.228271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.228432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.228441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 
00:27:08.689 [2024-07-12 17:35:27.228523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.228531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.228605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.228617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.228770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.228778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.228854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.228863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.228965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.228974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 
00:27:08.689 [2024-07-12 17:35:27.229070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.229079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.229288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.229297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.229386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.229395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.229542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.229550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.229616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.229625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 
00:27:08.689 [2024-07-12 17:35:27.229764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.229773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.229857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.229866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.229943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.229952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.230037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.230046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.230149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.230158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 
00:27:08.689 [2024-07-12 17:35:27.230250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.230259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.230420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.230430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.230601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.230618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.230773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.230785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.230892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.230904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 
00:27:08.689 [2024-07-12 17:35:27.230989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.231002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.231179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.231192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.231274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.231286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.231372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.231392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.231546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.231558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 
00:27:08.689 [2024-07-12 17:35:27.231731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.231743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.231889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.689 [2024-07-12 17:35:27.231901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.689 qpair failed and we were unable to recover it. 00:27:08.689 [2024-07-12 17:35:27.232047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.232060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.232156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.232169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.232303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.232313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 
00:27:08.690 [2024-07-12 17:35:27.232405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.232414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.232587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.232596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.232737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.232746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.232824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.232833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.232926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.232935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 
00:27:08.690 [2024-07-12 17:35:27.233091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.233102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.233185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.233195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.233284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.233293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.233367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.233375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.233581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.233592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 
00:27:08.690 [2024-07-12 17:35:27.233680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.233690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.233835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.233845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.233930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.233939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.234080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.234089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.234251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.234262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 
00:27:08.690 [2024-07-12 17:35:27.234345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.234353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.234442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.234452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.234699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.234709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.234797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.234807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.234894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.234903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 
00:27:08.690 [2024-07-12 17:35:27.234978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.234987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.235145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.235155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.235308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.235318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.235480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.235490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.235655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.235684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 
00:27:08.690 [2024-07-12 17:35:27.235816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.235844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.235979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.236008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.236157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.236192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.236313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.236323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.236530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.236540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 
00:27:08.690 [2024-07-12 17:35:27.236678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.236688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.236917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.236927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.237081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.237090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.237252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.690 [2024-07-12 17:35:27.237262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.690 qpair failed and we were unable to recover it. 00:27:08.690 [2024-07-12 17:35:27.237355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.691 [2024-07-12 17:35:27.237364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.691 qpair failed and we were unable to recover it. 
00:27:08.691 [2024-07-12 17:35:27.237452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.691 [2024-07-12 17:35:27.237462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.691 qpair failed and we were unable to recover it.
[... identical error repeated: connect() refused (errno = 111, ECONNREFUSED) to addr=10.0.0.2, port=4420 for tqpair=0x7f4a7c000b90, with each reconnect attempt logging the same three lines, from 17:35:27.237452 through 17:35:27.253366 ...]
00:27:08.694 [2024-07-12 17:35:27.253561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.253573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.253730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.253740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.253888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.253897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.253990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.254000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.254088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.254098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 
00:27:08.694 [2024-07-12 17:35:27.254194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.254204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.254302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.254312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.254401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.254411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.254502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.254511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.254604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.254615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 
00:27:08.694 [2024-07-12 17:35:27.254702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.254712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.254786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.254797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.254875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.254885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.255038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.255047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.255122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.255132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 
00:27:08.694 [2024-07-12 17:35:27.255272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.255282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.255369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.255392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.255575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.255585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.255660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.255669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.255878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.255888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 
00:27:08.694 [2024-07-12 17:35:27.256035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.256044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.256128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.256138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.256226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.256235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.256415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.256425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.256516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.256532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 
00:27:08.694 [2024-07-12 17:35:27.256622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.256635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.256776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.256790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.256870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.256883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.256979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.256994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.257085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.257098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 
00:27:08.694 [2024-07-12 17:35:27.257316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.257330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.257422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.257436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.257533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.257547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.257695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.257708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.694 [2024-07-12 17:35:27.257811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.257825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 
00:27:08.694 [2024-07-12 17:35:27.257999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.694 [2024-07-12 17:35:27.258012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.694 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.258094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.258107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.258212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.258228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.258390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.258405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.258590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.258604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 
00:27:08.695 [2024-07-12 17:35:27.258674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.258688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.258783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.258797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.258947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.258960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.259042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.259056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.259157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.259170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 
00:27:08.695 [2024-07-12 17:35:27.259250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.259263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.259412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.259426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.259654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.259667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.259761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.259774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.259879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.259893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 
00:27:08.695 [2024-07-12 17:35:27.259986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.259999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.260154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.260169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.260352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.260366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.260479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.260509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.260624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.260635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 
00:27:08.695 [2024-07-12 17:35:27.260726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.260736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.260810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.260821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.260910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.260920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.261080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.261091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.261187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.261197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 
00:27:08.695 [2024-07-12 17:35:27.261324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.261334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.261430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.261440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.261530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.261539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.261774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.261785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.261869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.261887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 
00:27:08.695 [2024-07-12 17:35:27.261979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.261993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.262073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.262087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.262242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.262255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.262367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.262385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.262494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.262507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 
00:27:08.695 [2024-07-12 17:35:27.262668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.262679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.262821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.262831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.262974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.262983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.263201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.263211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.263363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.263373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 
00:27:08.695 [2024-07-12 17:35:27.263452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.695 [2024-07-12 17:35:27.263462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.695 qpair failed and we were unable to recover it. 00:27:08.695 [2024-07-12 17:35:27.263536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-07-12 17:35:27.263545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-07-12 17:35:27.263707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-07-12 17:35:27.263717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-07-12 17:35:27.263860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-07-12 17:35:27.263870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 00:27:08.696 [2024-07-12 17:35:27.264011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.696 [2024-07-12 17:35:27.264020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.696 qpair failed and we were unable to recover it. 
00:27:08.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 31024 Killed "${NVMF_APP[@]}" "$@" 
00:27:08.698 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 
00:27:08.698 [2024-07-12 17:35:27.277350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-07-12 17:35:27.277359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:08.698 [2024-07-12 17:35:27.277448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-07-12 17:35:27.277458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-07-12 17:35:27.277545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-07-12 17:35:27.277555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-07-12 17:35:27.277656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-07-12 17:35:27.277667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 
00:27:08.698 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:08.698 [2024-07-12 17:35:27.277806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-07-12 17:35:27.277817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-07-12 17:35:27.277913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-07-12 17:35:27.277922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-07-12 17:35:27.278004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:08.698 [2024-07-12 17:35:27.278014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.698 [2024-07-12 17:35:27.278103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-07-12 17:35:27.278113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.698 qpair failed and we were unable to recover it. 
00:27:08.698 [2024-07-12 17:35:27.278263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.698 [2024-07-12 17:35:27.278274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.698 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:08.698 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.278356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.278366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.278518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.278528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.278613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.278622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.278709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.278719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-07-12 17:35:27.278801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.278811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.278908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.278917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.279009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.279019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.279104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.279114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.279197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.279206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-07-12 17:35:27.279350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.279360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.279458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.279468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.279560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.279571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.279735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.279745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.279833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.279843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-07-12 17:35:27.279908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.279916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.280059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.280071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.280162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.280171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.280265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.280274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.280433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.280444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-07-12 17:35:27.280528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.280540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.280704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.280715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.280846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.280856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.281077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.281087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.281241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.281250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-07-12 17:35:27.281350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.281360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.281517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.281528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.281686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.281695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.281766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.281775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.281869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.281878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-07-12 17:35:27.282021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.282030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.282172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.282182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.282327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.282339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.282418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.282429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.282590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.282601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-07-12 17:35:27.282679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.282690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.282772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.282782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.282994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.283004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.283095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.283104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.699 [2024-07-12 17:35:27.283192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.283202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 
00:27:08.699 [2024-07-12 17:35:27.283382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.699 [2024-07-12 17:35:27.283392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.699 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.283599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.283609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.283686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.283697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.283849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.283859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.283952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.283962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-07-12 17:35:27.284104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.284114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.284223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.284233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.284314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.284324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.284406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.284417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.284590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.284600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-07-12 17:35:27.284669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.284679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.284752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.284762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=31762 00:27:08.700 [2024-07-12 17:35:27.284846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.284857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.284937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.284947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.285037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.285047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 31762 00:27:08.700 [2024-07-12 17:35:27.285140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.285150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.285241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.285251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:08.700 [2024-07-12 17:35:27.285338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.285348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.285445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.285458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 31762 ']' 00:27:08.700 [2024-07-12 17:35:27.285540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.285549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.285625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.285635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.285773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.285785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.700 [2024-07-12 17:35:27.285935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.285945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 [2024-07-12 17:35:27.286029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.286041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:08.700 [2024-07-12 17:35:27.286129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.286140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.286222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.286231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.286318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.286328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:08.700 [2024-07-12 17:35:27.286411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.286422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.286567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.286584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:08.700 [2024-07-12 17:35:27.286654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.286665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.286770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.286781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 00:27:08.700 [2024-07-12 17:35:27.286860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.700 [2024-07-12 17:35:27.286870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.700 qpair failed and we were unable to recover it. 
00:27:08.700 17:35:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:08.700 [2024-07-12 17:35:27.286948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.700 [2024-07-12 17:35:27.286958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.700 qpair failed and we were unable to recover it.
00:27:08.700 [2024-07-12 17:35:27.287030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.700 [2024-07-12 17:35:27.287039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.700 qpair failed and we were unable to recover it.
00:27:08.700 [2024-07-12 17:35:27.287116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.700 [2024-07-12 17:35:27.287127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.700 qpair failed and we were unable to recover it.
00:27:08.700 [2024-07-12 17:35:27.287233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.700 [2024-07-12 17:35:27.287243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.700 qpair failed and we were unable to recover it.
00:27:08.700 [2024-07-12 17:35:27.287309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.700 [2024-07-12 17:35:27.287318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.700 qpair failed and we were unable to recover it.
00:27:08.700 [2024-07-12 17:35:27.287392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.287402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.287478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.287488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.287631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.287642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.287731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.287740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.287813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.287823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.287968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.287977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.288048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.288058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.288146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.288156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.288235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.288244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.288321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.288331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.288474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.288484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.288560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.288569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.288657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.288667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.288749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.288758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.288839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.288850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.289020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.289030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.289106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.289116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.289190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.289200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.289304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.289313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.289397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.289407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.289485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.289495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.289638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.289647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.289711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.289720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.289793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.289802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.289953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.289963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.290045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.290054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.290201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.290211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.290303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.290313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.290395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.290407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.290492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.290501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.290666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.290676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.290754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.290764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.290919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.290930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.291024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.291033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.701 [2024-07-12 17:35:27.291108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.701 [2024-07-12 17:35:27.291117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.701 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.291264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.291274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.291415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.291425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.291502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.291511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.291650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.291659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.291740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.291750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.291846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.291858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.291935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.291945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.292021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.292031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.292203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.292213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.292423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.292433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.292522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.292532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.292628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.292639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.292716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.292726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.292798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.292808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.292971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.292982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.293059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.293069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.293285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.293295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.293395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.293405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.293546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.293556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.293697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.293706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.293802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.293812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.293885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.293895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.294042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.294051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.294142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.294152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.294311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.294333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.294506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.294521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.294604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.294617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.294767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.294781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.294859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.294872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.294961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.294975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.295132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.295144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.295252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.295266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.295432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.295446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.295546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.295557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.295651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.295660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.295756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.295767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.295879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.295889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.296068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.296080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.296171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.296181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.296308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.702 [2024-07-12 17:35:27.296317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.702 qpair failed and we were unable to recover it.
00:27:08.702 [2024-07-12 17:35:27.296406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.296416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.296568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.296578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.296731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.296741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.296924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.296934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.296995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.297004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.297095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.297105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.297201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.297211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.297296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.297306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.297478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.297488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.297698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.297708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.297826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.297835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.297918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.297928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.298027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.298037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.298133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.298143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.298222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.298232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.298313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.298323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.298486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.298496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.298581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.298591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.298671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.298681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.298748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.298758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.298902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.298913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.299003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.299012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.299124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.299134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.299221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.299231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.299383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.299393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.299475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.299485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.299602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.299612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.299754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.299763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.299899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.299909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.299989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.299999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.300098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.300107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.300200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.300210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.300294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.300303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.300364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.300387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.300459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.703 [2024-07-12 17:35:27.300470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.703 qpair failed and we were unable to recover it.
00:27:08.703 [2024-07-12 17:35:27.300547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.703 [2024-07-12 17:35:27.300556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.703 qpair failed and we were unable to recover it. 00:27:08.703 [2024-07-12 17:35:27.300672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.703 [2024-07-12 17:35:27.300682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.703 qpair failed and we were unable to recover it. 00:27:08.703 [2024-07-12 17:35:27.300841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.703 [2024-07-12 17:35:27.300853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.703 qpair failed and we were unable to recover it. 00:27:08.703 [2024-07-12 17:35:27.300926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.703 [2024-07-12 17:35:27.300936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.703 qpair failed and we were unable to recover it. 00:27:08.703 [2024-07-12 17:35:27.301150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.703 [2024-07-12 17:35:27.301160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.703 qpair failed and we were unable to recover it. 
00:27:08.703 [2024-07-12 17:35:27.301233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.301242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.301388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.301398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.301484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.301494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.301600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.301609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.301752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.301761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 
00:27:08.704 [2024-07-12 17:35:27.301969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.301979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.302053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.302063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.302132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.302142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.302287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.302296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.302434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.302444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 
00:27:08.704 [2024-07-12 17:35:27.302585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.302595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.302672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.302682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.302846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.302857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.302940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.302949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.303094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.303105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 
00:27:08.704 [2024-07-12 17:35:27.303194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.303204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.303360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.303370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.303534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.303544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.303709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.303719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.303820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.303830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 
00:27:08.704 [2024-07-12 17:35:27.303913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.303923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.304036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.304046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.304193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.304203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.304357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.304368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.304483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.304499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 
00:27:08.704 [2024-07-12 17:35:27.304651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.304664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.304829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.304842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.305011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.305024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.305103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.305116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.305300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.305314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 
00:27:08.704 [2024-07-12 17:35:27.305445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.305459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.305541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.305555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.305705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.305719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.305936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.305949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.306109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.306129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 
00:27:08.704 [2024-07-12 17:35:27.306292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.306306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.306414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.306428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.306586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.306604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.306854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.306868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 00:27:08.704 [2024-07-12 17:35:27.306972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.306984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.704 qpair failed and we were unable to recover it. 
00:27:08.704 [2024-07-12 17:35:27.307131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.704 [2024-07-12 17:35:27.307140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.307291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.307300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.307509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.307519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.307665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.307675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.307759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.307768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 
00:27:08.705 [2024-07-12 17:35:27.307862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.307871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.308015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.308025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.308098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.308108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.308255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.308264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.308432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.308442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 
00:27:08.705 [2024-07-12 17:35:27.308519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.308529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.308623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.308633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.308792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.308801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.308975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.308985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.309062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.309072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 
00:27:08.705 [2024-07-12 17:35:27.309215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.309225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.309308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.309318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.309508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.309518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.309664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.309674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.309757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.309766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 
00:27:08.705 [2024-07-12 17:35:27.309857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.309866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.309941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.309951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.310037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.310047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.310120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.310129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.310227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.310243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 
00:27:08.705 [2024-07-12 17:35:27.310338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.310352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.310505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.310520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.310621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.310638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.310799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.310813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.310957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.310971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 
00:27:08.705 [2024-07-12 17:35:27.311061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.311074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.311258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.311272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.311359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.311372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.311483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.705 [2024-07-12 17:35:27.311497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.705 qpair failed and we were unable to recover it. 00:27:08.705 [2024-07-12 17:35:27.311595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.706 [2024-07-12 17:35:27.311608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:08.706 qpair failed and we were unable to recover it. 
00:27:08.706 [... 25 further identical records elided for tqpair=0x7f4a84000b90: connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — timestamps 2024-07-12 17:35:27.311703 through 17:35:27.315127 ...]
00:27:08.706 [2024-07-12 17:35:27.315205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.706 [2024-07-12 17:35:27.315217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.706 qpair failed and we were unable to recover it.
00:27:08.706 [2024-07-12 17:35:27.315327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.706 [2024-07-12 17:35:27.315336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.706 qpair failed and we were unable to recover it.
00:27:08.706 [2024-07-12 17:35:27.315430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.706 [2024-07-12 17:35:27.315441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.706 qpair failed and we were unable to recover it.
00:27:08.706 [2024-07-12 17:35:27.315525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.706 [2024-07-12 17:35:27.315535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.706 qpair failed and we were unable to recover it.
00:27:08.706 [2024-07-12 17:35:27.315612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.706 [2024-07-12 17:35:27.315624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.706 qpair failed and we were unable to recover it.
00:27:08.706 [2024-07-12 17:35:27.315765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.706 [2024-07-12 17:35:27.315775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.706 qpair failed and we were unable to recover it. 00:27:08.706 [2024-07-12 17:35:27.315859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.706 [2024-07-12 17:35:27.315869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.706 qpair failed and we were unable to recover it. 00:27:08.706 [2024-07-12 17:35:27.316099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.706 [2024-07-12 17:35:27.316109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.706 qpair failed and we were unable to recover it. 00:27:08.706 [2024-07-12 17:35:27.316356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.706 [2024-07-12 17:35:27.316366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.706 qpair failed and we were unable to recover it. 00:27:08.706 [2024-07-12 17:35:27.316513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.706 [2024-07-12 17:35:27.316523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.706 qpair failed and we were unable to recover it. 
00:27:08.706 [2024-07-12 17:35:27.316676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.706 [2024-07-12 17:35:27.316686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.706 qpair failed and we were unable to recover it. 00:27:08.706 [2024-07-12 17:35:27.316772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.706 [2024-07-12 17:35:27.316782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.706 qpair failed and we were unable to recover it. 00:27:08.706 [2024-07-12 17:35:27.316854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.706 [2024-07-12 17:35:27.316863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.706 qpair failed and we were unable to recover it. 00:27:08.706 [2024-07-12 17:35:27.316956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.706 [2024-07-12 17:35:27.316967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.706 qpair failed and we were unable to recover it. 00:27:08.706 [2024-07-12 17:35:27.317142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.706 [2024-07-12 17:35:27.317152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.706 qpair failed and we were unable to recover it. 
00:27:08.706 [2024-07-12 17:35:27.317300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.317310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.317483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.317493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.317589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.317599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.317683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.317693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.317847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.317857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 
00:27:08.707 [2024-07-12 17:35:27.318005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.318015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.318101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.318111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.318200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.318210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.318288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.318298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.318375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.318398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 
00:27:08.707 [2024-07-12 17:35:27.318491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.318502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.318593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.318603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.318684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.318694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.318869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.318879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.318954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.318964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 
00:27:08.707 [2024-07-12 17:35:27.319055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.319065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.319141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.319151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.319228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.319238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.319318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.319328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.319473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.319483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 
00:27:08.707 [2024-07-12 17:35:27.319566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.319575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.319661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.319671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.319754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.319763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.319841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.319851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.319934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.319944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 
00:27:08.707 [2024-07-12 17:35:27.320018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.320027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.320174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.320184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.320422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.320432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.320522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.320532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.320611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.320623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 
00:27:08.707 [2024-07-12 17:35:27.320715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.320725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.320827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.320838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.320992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.321001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.321153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.321163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.321286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.321296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 
00:27:08.707 [2024-07-12 17:35:27.321382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.321392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.321477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.321487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.321562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.321572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.321647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.321656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 00:27:08.707 [2024-07-12 17:35:27.321816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.321826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.707 qpair failed and we were unable to recover it. 
00:27:08.707 [2024-07-12 17:35:27.321990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.707 [2024-07-12 17:35:27.322000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.322092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.322102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.322265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.322275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.322354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.322364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.322511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.322521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 
00:27:08.708 [2024-07-12 17:35:27.322598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.322607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.322688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.322699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.322852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.322861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.323018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.323028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.323104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.323114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 
00:27:08.708 [2024-07-12 17:35:27.323204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.323214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.323372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.323387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.323466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.323476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.323644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.323653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.323815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.323825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 
00:27:08.708 [2024-07-12 17:35:27.323971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.323981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.324080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.324090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.324322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.324331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.324403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.324413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 00:27:08.708 [2024-07-12 17:35:27.324510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.708 [2024-07-12 17:35:27.324520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.708 qpair failed and we were unable to recover it. 
00:27:08.708 [2024-07-12 17:35:27.324622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.324631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.324713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.324723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.324875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.324886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.324989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.324998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.325084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.325094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.325189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.325199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.325331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.325341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.325416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.325426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.325508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.325518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.325678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.325690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.325763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.325772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.325862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.325871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.325946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.325956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.326103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.326113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.326274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.326284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.326366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.326376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.326460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.326471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.326537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.326548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.326634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.326645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.326732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.326742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.326881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.708 [2024-07-12 17:35:27.326891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.708 qpair failed and we were unable to recover it.
00:27:08.708 [2024-07-12 17:35:27.326966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.326975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.327060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.327070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.327181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.327191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.327342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.327352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.327450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.327460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.327537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.327546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.327637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.327648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.327753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.327764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.327850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.327860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.327938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.327947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.328041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.328051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.328233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.328243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.328337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.328347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.328428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.328438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.328615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.328625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.328781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.328791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.328890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.328900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.328979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.328989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.329057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.329066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.329129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.329137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.329214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.329224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.329316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.329326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.329407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.329417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.329503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.329513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.329648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.329657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.329813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.329823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.329912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.329922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.330022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.330031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.330123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.330135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.330278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.330288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.330373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.330387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.330464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.330473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.330562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.330571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.330649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.330659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.330819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.330829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.330895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.330907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.331064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.331074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.331227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.331237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.709 qpair failed and we were unable to recover it.
00:27:08.709 [2024-07-12 17:35:27.331376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.709 [2024-07-12 17:35:27.331390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.331541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.331550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.331686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.331696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.331813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.331822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.331901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.331912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.331988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.331997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.332056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.332064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.332148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.332159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.332389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.332399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.332481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.332490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.332650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.332660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.332668] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:27:08.710 [2024-07-12 17:35:27.332705] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:08.710 [2024-07-12 17:35:27.332816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.332826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.332968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.332977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.333067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.333076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.333218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.333228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.333311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.333320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.333398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.333407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.333540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.333550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.333642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.333652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.333744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.333754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.333849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.333859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.334013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.334022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.710 qpair failed and we were unable to recover it.
00:27:08.710 [2024-07-12 17:35:27.334166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.710 [2024-07-12 17:35:27.334176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.711 qpair failed and we were unable to recover it.
00:27:08.711 [2024-07-12 17:35:27.334283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.711 [2024-07-12 17:35:27.334293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.711 qpair failed and we were unable to recover it.
00:27:08.711 [2024-07-12 17:35:27.334388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.711 [2024-07-12 17:35:27.334399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.711 qpair failed and we were unable to recover it.
00:27:08.711 [2024-07-12 17:35:27.334488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.711 [2024-07-12 17:35:27.334497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.711 qpair failed and we were unable to recover it.
00:27:08.711 [2024-07-12 17:35:27.334571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.711 [2024-07-12 17:35:27.334580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.711 qpair failed and we were unable to recover it.
00:27:08.711 [2024-07-12 17:35:27.334670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.711 [2024-07-12 17:35:27.334679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.711 qpair failed and we were unable to recover it.
00:27:08.711 [2024-07-12 17:35:27.334829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.711 [2024-07-12 17:35:27.334838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.711 qpair failed and we were unable to recover it.
00:27:08.711 [2024-07-12 17:35:27.334928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.711 [2024-07-12 17:35:27.334938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.711 qpair failed and we were unable to recover it.
00:27:08.711 [2024-07-12 17:35:27.335023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.711 [2024-07-12 17:35:27.335033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.711 qpair failed and we were unable to recover it.
00:27:08.711 [2024-07-12 17:35:27.335134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.711 [2024-07-12 17:35:27.335144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.711 qpair failed and we were unable to recover it.
00:27:08.711 [2024-07-12 17:35:27.335217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.711 [2024-07-12 17:35:27.335226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.711 qpair failed and we were unable to recover it.
00:27:08.711 [2024-07-12 17:35:27.335369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.711 [2024-07-12 17:35:27.335384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.711 qpair failed and we were unable to recover it.
00:27:08.711 [2024-07-12 17:35:27.335466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.711 [2024-07-12 17:35:27.335476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:08.711 qpair failed and we were unable to recover it.
00:27:08.711 [2024-07-12 17:35:27.335560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.335570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.335658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.335668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.335765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.335774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.335863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.335872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.336034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.336044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 
00:27:08.711 [2024-07-12 17:35:27.336127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.336136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.336210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.336220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.336361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.336373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.336459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.336470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.336678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.336688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 
00:27:08.711 [2024-07-12 17:35:27.336762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.336771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.336862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.336871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.336946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.336956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.337039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.337048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.337134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.337144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 
00:27:08.711 [2024-07-12 17:35:27.337285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.337295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.337437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.337447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.337555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.337564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.337656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.337665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.337751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.337761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 
00:27:08.711 [2024-07-12 17:35:27.337847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.337856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.338001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.338011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.338104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.338113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.338306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.338316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.338407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.338416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 
00:27:08.711 [2024-07-12 17:35:27.338492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.711 [2024-07-12 17:35:27.338502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.711 qpair failed and we were unable to recover it. 00:27:08.711 [2024-07-12 17:35:27.338575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.338584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.338732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.338742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.338825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.338835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.338913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.338922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 
00:27:08.712 [2024-07-12 17:35:27.339012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.339022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.339096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.339105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.339246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.339256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.339410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.339420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.339495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.339506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 
00:27:08.712 [2024-07-12 17:35:27.339594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.339604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.339716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.339726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.339871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.339881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.339969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.339978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.340064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.340074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 
00:27:08.712 [2024-07-12 17:35:27.340218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.340227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.340301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.340311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.340394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.340403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.340477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.340486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.340566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.340577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 
00:27:08.712 [2024-07-12 17:35:27.340654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.340663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.340753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.340762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.340837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.340851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.340927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.340937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.341084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.341094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 
00:27:08.712 [2024-07-12 17:35:27.341186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.341195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.341268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.341277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.341356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.341366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.341526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.341535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.341676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.341686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 
00:27:08.712 [2024-07-12 17:35:27.341829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.341839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.342081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.342091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.342176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.342186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.342338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.342348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.342455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.342466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 
00:27:08.712 [2024-07-12 17:35:27.342545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.342555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.342761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.342772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.342853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.342862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.342957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.342966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.343040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.343050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 
00:27:08.712 [2024-07-12 17:35:27.343135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.343145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.712 [2024-07-12 17:35:27.343244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.712 [2024-07-12 17:35:27.343254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.712 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.343331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.343340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.343432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.343443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.343525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.343535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 
00:27:08.713 [2024-07-12 17:35:27.343687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.343697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.343842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.343851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.343934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.343943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.344022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.344031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.344111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.344120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 
00:27:08.713 [2024-07-12 17:35:27.344213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.344223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.344306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.344315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.344476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.344487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.344575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.344585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.344727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.344738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 
00:27:08.713 [2024-07-12 17:35:27.344812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.344821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.344914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.344924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.345007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.345017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.345099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.345109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.345189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.345199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 
00:27:08.713 [2024-07-12 17:35:27.345279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.345288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.345435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.345445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.345586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.345597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.345688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.345698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 00:27:08.713 [2024-07-12 17:35:27.345780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.713 [2024-07-12 17:35:27.345789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.713 qpair failed and we were unable to recover it. 
00:27:08.716 [2024-07-12 17:35:27.357956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.357966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.358050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.358060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.358165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.358174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.358240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.358249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.358338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.358348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 
00:27:08.716 [2024-07-12 17:35:27.358412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.358421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.358510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.358520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.358615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.358625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.358709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.358719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.358806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.358816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 
00:27:08.716 [2024-07-12 17:35:27.358900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.358911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.358999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.359009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.359092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.359102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.359193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.359203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.359286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.359295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 
00:27:08.716 [2024-07-12 17:35:27.359386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.359396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.359546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.359556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.359637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.359647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.359734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.359743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 00:27:08.716 [2024-07-12 17:35:27.359803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.359813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.716 qpair failed and we were unable to recover it. 
00:27:08.716 [2024-07-12 17:35:27.359906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.716 [2024-07-12 17:35:27.359917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.717 [2024-07-12 17:35:27.360090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.360100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.360197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.360207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.360301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.360310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.360387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.360398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 
00:27:08.717 [2024-07-12 17:35:27.360476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.360485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.360558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.360567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.360645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.360655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.360808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.360818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.360963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.360973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 
00:27:08.717 [2024-07-12 17:35:27.361052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.361061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.361146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.361155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.361240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.361250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.361357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.361366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.361452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.361463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 
00:27:08.717 [2024-07-12 17:35:27.361536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.361545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.361660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.361669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.361828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.361838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.361922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.361931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.362006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.362015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 
00:27:08.717 [2024-07-12 17:35:27.362106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.362115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.362258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.362268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.362345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.362354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.362434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.362444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.362529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.362539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 
00:27:08.717 [2024-07-12 17:35:27.362690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.362700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.362780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.362789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.362882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.362895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.363036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.363045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.363190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.363200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 
00:27:08.717 [2024-07-12 17:35:27.363276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.363285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.363389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.363400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.363495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.363504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.363594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.363604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.363754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.363764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 
00:27:08.717 [2024-07-12 17:35:27.363844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.363854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.363944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.363954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.364031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.364040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.364142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.364153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.364233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.364243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 
00:27:08.717 [2024-07-12 17:35:27.364315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.717 [2024-07-12 17:35:27.364325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.717 qpair failed and we were unable to recover it. 00:27:08.717 [2024-07-12 17:35:27.364410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.364420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 00:27:08.718 [2024-07-12 17:35:27.364502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.364512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 00:27:08.718 [2024-07-12 17:35:27.364591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.364601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 00:27:08.718 [2024-07-12 17:35:27.364750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.364760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 
00:27:08.718 [2024-07-12 17:35:27.364843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.364853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 00:27:08.718 [2024-07-12 17:35:27.364926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.364935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 00:27:08.718 [2024-07-12 17:35:27.365024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.365035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 00:27:08.718 [2024-07-12 17:35:27.365183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.365192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 00:27:08.718 [2024-07-12 17:35:27.365289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.365299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 
00:27:08.718 [2024-07-12 17:35:27.365389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.365399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 00:27:08.718 [2024-07-12 17:35:27.365478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.365487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 00:27:08.718 [2024-07-12 17:35:27.365579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.365589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 00:27:08.718 [2024-07-12 17:35:27.365726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.365736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 00:27:08.718 [2024-07-12 17:35:27.365808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.365818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it. 
00:27:08.718 [2024-07-12 17:35:27.365918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.718 [2024-07-12 17:35:27.365928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.718 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it" message pair repeats continuously for tqpair=0x7f4a7c000b90 (addr=10.0.0.2, port=4420) through 2024-07-12 17:35:27.378 ...]
00:27:08.721 [2024-07-12 17:35:27.378818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.378828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.378965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.378975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.379064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.379074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.379153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.379163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.379238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.379249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 
00:27:08.721 [2024-07-12 17:35:27.379346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.379355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.379447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.379458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.379546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.379555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.379697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.379707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.379918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.379927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 
00:27:08.721 [2024-07-12 17:35:27.380001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.380011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.380112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.380122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.380196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.380207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.380288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.380298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.380370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.380383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 
00:27:08.721 [2024-07-12 17:35:27.380523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.380534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.380627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.380636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.380718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.380729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.380808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.380817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.380894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.380903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 
00:27:08.721 [2024-07-12 17:35:27.381044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.381054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.381125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.381134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.381201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.381212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.381289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.381298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.381363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.381374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 
00:27:08.721 [2024-07-12 17:35:27.381494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.381505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.381593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.381603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.721 qpair failed and we were unable to recover it. 00:27:08.721 [2024-07-12 17:35:27.381747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.721 [2024-07-12 17:35:27.381758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.381843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.381853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.382010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.382020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 
00:27:08.722 [2024-07-12 17:35:27.382164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.382173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.382271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.382281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.382382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.382392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.382611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.382622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.382768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.382777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 
00:27:08.722 [2024-07-12 17:35:27.382872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.382881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.382979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.382988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.383052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.383062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.383152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.383161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.383241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.383250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 
00:27:08.722 [2024-07-12 17:35:27.383399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.383410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.383487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.383497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.383997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.384008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.384167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.384176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.384249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.384260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 
00:27:08.722 [2024-07-12 17:35:27.384435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.384446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.384616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.384625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.384711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.384720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.384931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.384941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.385087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.385097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 
00:27:08.722 [2024-07-12 17:35:27.385260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.385269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.385401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.385411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.385556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.385565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.385640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.385650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.385739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.385749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 
00:27:08.722 [2024-07-12 17:35:27.385967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.385978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.386138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.386148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.386299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.386308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.386457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.386467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.386628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.386638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 
00:27:08.722 [2024-07-12 17:35:27.386739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.386749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.386835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.386845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.386930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.386940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.387099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.387108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.387184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.387193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 
00:27:08.722 [2024-07-12 17:35:27.387269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.387279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.387340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.387351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.387501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.722 [2024-07-12 17:35:27.387511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.722 qpair failed and we were unable to recover it. 00:27:08.722 [2024-07-12 17:35:27.387591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.723 [2024-07-12 17:35:27.387601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.723 qpair failed and we were unable to recover it. 00:27:08.723 [2024-07-12 17:35:27.387700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.723 [2024-07-12 17:35:27.387709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.723 qpair failed and we were unable to recover it. 
00:27:08.723 [2024-07-12 17:35:27.387784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.723 [2024-07-12 17:35:27.387794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.723 qpair failed and we were unable to recover it. 00:27:08.723 [2024-07-12 17:35:27.387876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.723 [2024-07-12 17:35:27.387886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.723 qpair failed and we were unable to recover it. 00:27:08.723 [2024-07-12 17:35:27.388044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.723 [2024-07-12 17:35:27.388054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.723 qpair failed and we were unable to recover it. 00:27:08.723 [2024-07-12 17:35:27.388197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.723 [2024-07-12 17:35:27.388207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.723 qpair failed and we were unable to recover it. 00:27:08.723 [2024-07-12 17:35:27.388296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.723 [2024-07-12 17:35:27.388306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.723 qpair failed and we were unable to recover it. 
00:27:08.723 [2024-07-12 17:35:27.388421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.723 [2024-07-12 17:35:27.388432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.723 qpair failed and we were unable to recover it. 
00:27:08.726 [2024-07-12 17:35:27.400121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.400130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.400229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.400239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.400386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.400395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.400491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.400501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.400592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.400601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 
00:27:08.726 [2024-07-12 17:35:27.400692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.400701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.400777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.400787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.400936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.400946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.401015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.401025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.401167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.401177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 
00:27:08.726 [2024-07-12 17:35:27.401320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.401330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.401412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.401422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.401527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.401537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.401637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.401647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.401724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.401735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 
00:27:08.726 [2024-07-12 17:35:27.401810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.401819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.401978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.401987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.402152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.402162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.402236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.402245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.402324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.402333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 
00:27:08.726 [2024-07-12 17:35:27.402447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.402457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.402535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.402545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.402624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.402634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.402725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.402735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.402888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.402898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 
00:27:08.726 [2024-07-12 17:35:27.402991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.403001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.403090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.403100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.403191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.403200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.403296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.403305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.403386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.403396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 
00:27:08.726 [2024-07-12 17:35:27.403547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.403557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.403706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.403715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.403801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.403811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.726 qpair failed and we were unable to recover it. 00:27:08.726 [2024-07-12 17:35:27.403969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.726 [2024-07-12 17:35:27.403978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.404120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.404129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 
00:27:08.727 [2024-07-12 17:35:27.404222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.404233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.404395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.404405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.404484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.404494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.404586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.404595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.404680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.404689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 
00:27:08.727 [2024-07-12 17:35:27.404785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.404794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.404946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.404956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.405030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.405039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.405113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.405122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.405276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.405285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 
00:27:08.727 [2024-07-12 17:35:27.405364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.405374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.405464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.405474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.405570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.405581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.405668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.405678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.405757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.405767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 
00:27:08.727 [2024-07-12 17:35:27.405863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.405873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.405967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.405977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.406135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.406145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.406312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.406322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.406396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.406409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 
00:27:08.727 [2024-07-12 17:35:27.406484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.406494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.406586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.406595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.406686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.406696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.406775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.406784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.406892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.406901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 
00:27:08.727 [2024-07-12 17:35:27.406991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.407000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.407179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.407188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.407267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.407277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.407372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.407387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.407430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:08.727 [2024-07-12 17:35:27.407459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.407468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 
00:27:08.727 [2024-07-12 17:35:27.407542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.407551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.407622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.407631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.727 qpair failed and we were unable to recover it. 00:27:08.727 [2024-07-12 17:35:27.407704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.727 [2024-07-12 17:35:27.407718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.728 qpair failed and we were unable to recover it. 00:27:08.728 [2024-07-12 17:35:27.407792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.728 [2024-07-12 17:35:27.407802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.728 qpair failed and we were unable to recover it. 00:27:08.728 [2024-07-12 17:35:27.407878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.728 [2024-07-12 17:35:27.407887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.728 qpair failed and we were unable to recover it. 
00:27:08.728 [2024-07-12 17:35:27.407964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.728 [2024-07-12 17:35:27.407974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.728 qpair failed and we were unable to recover it. 00:27:08.728 [2024-07-12 17:35:27.408045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.728 [2024-07-12 17:35:27.408054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.728 qpair failed and we were unable to recover it. 00:27:08.728 [2024-07-12 17:35:27.408225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.728 [2024-07-12 17:35:27.408236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.728 qpair failed and we were unable to recover it. 00:27:08.728 [2024-07-12 17:35:27.408335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.728 [2024-07-12 17:35:27.408345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.728 qpair failed and we were unable to recover it. 00:27:08.728 [2024-07-12 17:35:27.408490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.728 [2024-07-12 17:35:27.408501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.728 qpair failed and we were unable to recover it. 
00:27:08.728 [2024-07-12 17:35:27.408667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.728 [2024-07-12 17:35:27.408678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.728 qpair failed and we were unable to recover it. 
00:27:08.731 [2024-07-12 17:35:27.421317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.421326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.421406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.421416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.421497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.421508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.421651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.421660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.421735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.421745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 
00:27:08.731 [2024-07-12 17:35:27.421837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.421847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.421931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.421940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.422038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.422048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.422108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.422117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.422197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.422206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 
00:27:08.731 [2024-07-12 17:35:27.422362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.422371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.422466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.422477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.422562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.422572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.422714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.422724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.422796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.422805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 
00:27:08.731 [2024-07-12 17:35:27.422881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.422891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.422975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.422985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.423090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.423100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.423171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.423181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.423258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.423267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 
00:27:08.731 [2024-07-12 17:35:27.423345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.423354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.423444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.423455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.423529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.423538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.423699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.423709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.423785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.423795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 
00:27:08.731 [2024-07-12 17:35:27.423937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.423947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.424035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.424045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.424124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.424134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.424217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.424228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.424301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.424311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 
00:27:08.731 [2024-07-12 17:35:27.424389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.424400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.424491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.424501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.424686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.424696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.424839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.424848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.424930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.424940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 
00:27:08.731 [2024-07-12 17:35:27.425016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.425026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.425113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.425122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.731 qpair failed and we were unable to recover it. 00:27:08.731 [2024-07-12 17:35:27.425216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.731 [2024-07-12 17:35:27.425227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.425376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.425390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.425553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.425562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 
00:27:08.732 [2024-07-12 17:35:27.425725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.425734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.425823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.425832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.425919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.425928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.426015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.426025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.426102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.426111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 
00:27:08.732 [2024-07-12 17:35:27.426190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.426201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.426295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.426304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.426405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.426414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.426487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.426497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.426590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.426599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 
00:27:08.732 [2024-07-12 17:35:27.426690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.426700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.426776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.426786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.426862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.426871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.426972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.426981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.427062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.427072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 
00:27:08.732 [2024-07-12 17:35:27.427153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.427162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.427240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.427249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.427320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.427329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.427410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.427420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.427505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.427515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 
00:27:08.732 [2024-07-12 17:35:27.427595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.427605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.427681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.427691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.427769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.427779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.427867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.427877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.427947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.427959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 
00:27:08.732 [2024-07-12 17:35:27.428127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.428137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.428220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.732 [2024-07-12 17:35:27.428230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:08.732 qpair failed and we were unable to recover it. 00:27:08.732 [2024-07-12 17:35:27.428312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.011 [2024-07-12 17:35:27.428322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.011 qpair failed and we were unable to recover it. 00:27:09.011 [2024-07-12 17:35:27.428404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.011 [2024-07-12 17:35:27.428414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.011 qpair failed and we were unable to recover it. 00:27:09.011 [2024-07-12 17:35:27.428505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.011 [2024-07-12 17:35:27.428515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.011 qpair failed and we were unable to recover it. 
00:27:09.011 [2024-07-12 17:35:27.428591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.011 [2024-07-12 17:35:27.428601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.011 qpair failed and we were unable to recover it. 00:27:09.011 [2024-07-12 17:35:27.428679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.011 [2024-07-12 17:35:27.428690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.011 qpair failed and we were unable to recover it. 00:27:09.011 [2024-07-12 17:35:27.428781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.011 [2024-07-12 17:35:27.428791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.011 qpair failed and we were unable to recover it. 00:27:09.011 [2024-07-12 17:35:27.428937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.011 [2024-07-12 17:35:27.428949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.011 qpair failed and we were unable to recover it. 00:27:09.011 [2024-07-12 17:35:27.429093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.011 [2024-07-12 17:35:27.429103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.011 qpair failed and we were unable to recover it. 
00:27:09.011 [2024-07-12 17:35:27.429180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.011 [2024-07-12 17:35:27.429190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.011 qpair failed and we were unable to recover it. 00:27:09.011 [2024-07-12 17:35:27.429331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.011 [2024-07-12 17:35:27.429341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.011 qpair failed and we were unable to recover it. 00:27:09.011 [2024-07-12 17:35:27.429500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.011 [2024-07-12 17:35:27.429510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.011 qpair failed and we were unable to recover it. 00:27:09.011 [2024-07-12 17:35:27.429613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.011 [2024-07-12 17:35:27.429623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.011 qpair failed and we were unable to recover it. 00:27:09.011 [2024-07-12 17:35:27.429700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.011 [2024-07-12 17:35:27.429710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.011 qpair failed and we were unable to recover it. 
00:27:09.014 [2024-07-12 17:35:27.442291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.014 [2024-07-12 17:35:27.442302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.014 qpair failed and we were unable to recover it. 00:27:09.014 [2024-07-12 17:35:27.442385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.014 [2024-07-12 17:35:27.442394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.014 qpair failed and we were unable to recover it. 00:27:09.014 [2024-07-12 17:35:27.442542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.014 [2024-07-12 17:35:27.442552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.014 qpair failed and we were unable to recover it. 00:27:09.014 [2024-07-12 17:35:27.442629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.014 [2024-07-12 17:35:27.442638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.014 qpair failed and we were unable to recover it. 00:27:09.014 [2024-07-12 17:35:27.442782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.014 [2024-07-12 17:35:27.442792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.014 qpair failed and we were unable to recover it. 
00:27:09.014 [2024-07-12 17:35:27.442932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.014 [2024-07-12 17:35:27.442942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.014 qpair failed and we were unable to recover it. 00:27:09.014 [2024-07-12 17:35:27.443116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.014 [2024-07-12 17:35:27.443126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.014 qpair failed and we were unable to recover it. 00:27:09.014 [2024-07-12 17:35:27.443198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.014 [2024-07-12 17:35:27.443208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.014 qpair failed and we were unable to recover it. 00:27:09.014 [2024-07-12 17:35:27.443281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.014 [2024-07-12 17:35:27.443290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.014 qpair failed and we were unable to recover it. 00:27:09.014 [2024-07-12 17:35:27.443376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.014 [2024-07-12 17:35:27.443390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.014 qpair failed and we were unable to recover it. 
00:27:09.014 [2024-07-12 17:35:27.443478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.014 [2024-07-12 17:35:27.443489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.014 qpair failed and we were unable to recover it. 00:27:09.014 [2024-07-12 17:35:27.443641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.014 [2024-07-12 17:35:27.443651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.014 qpair failed and we were unable to recover it. 00:27:09.014 [2024-07-12 17:35:27.443716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.014 [2024-07-12 17:35:27.443726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.014 qpair failed and we were unable to recover it. 00:27:09.014 [2024-07-12 17:35:27.443801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.443811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.443952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.443962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 
00:27:09.015 [2024-07-12 17:35:27.444034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.444044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.444193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.444203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.444357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.444367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.444469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.444479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.444570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.444580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 
00:27:09.015 [2024-07-12 17:35:27.444654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.444664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.444760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.444770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.444853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.444862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.444934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.444943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.445082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.445092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 
00:27:09.015 [2024-07-12 17:35:27.445174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.445184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.445258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.445268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.445340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.445350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.445489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.445500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.445579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.445589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 
00:27:09.015 [2024-07-12 17:35:27.445679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.445690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.445848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.445861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.445945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.445955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.446095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.446106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.446187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.446199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 
00:27:09.015 [2024-07-12 17:35:27.446285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.446295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.446458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.446470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.446555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.446566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.446645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.446656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.446737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.446748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 
00:27:09.015 [2024-07-12 17:35:27.446846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.446858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.446931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.446941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.447041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.447051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.447125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.447135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.447209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.447219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 
00:27:09.015 [2024-07-12 17:35:27.447363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.015 [2024-07-12 17:35:27.447374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.015 qpair failed and we were unable to recover it. 00:27:09.015 [2024-07-12 17:35:27.447547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.447557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.447703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.447714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.447886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.447898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.447986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.447996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 
00:27:09.016 [2024-07-12 17:35:27.448092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.448102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.448189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.448198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.448347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.448358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.448505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.448515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.448592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.448602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 
00:27:09.016 [2024-07-12 17:35:27.448688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.448698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.448785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.448795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.448898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.448908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.449052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.449062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.449152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.449162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 
00:27:09.016 [2024-07-12 17:35:27.449249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.449259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.449365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.449376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.449460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.449470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.449550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.449560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.449696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.449707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 
00:27:09.016 [2024-07-12 17:35:27.449804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.449813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.449899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.449909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.449987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.449997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.450090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.450100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.450239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.450250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 
00:27:09.016 [2024-07-12 17:35:27.450325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.450334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.450413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.450423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.450510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.450519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.450594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.450603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 00:27:09.016 [2024-07-12 17:35:27.450699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.016 [2024-07-12 17:35:27.450712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.016 qpair failed and we were unable to recover it. 
00:27:09.016 [2024-07-12 17:35:27.450853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.016 [2024-07-12 17:35:27.450864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.016 qpair failed and we were unable to recover it.
[The three messages above repeat verbatim from 17:35:27.450853 through 17:35:27.463288; every reconnect attempt fails identically with errno = 111 (ECONNREFUSED) for tqpair=0x7f4a7c000b90, addr=10.0.0.2, port=4420. Repeats omitted.]
00:27:09.019 [2024-07-12 17:35:27.463351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.463360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.463509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.463519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.463697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.463707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.463789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.463800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.463870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.463882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 
00:27:09.019 [2024-07-12 17:35:27.463977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.463986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.464082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.464092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.464183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.464193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.464284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.464294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.464367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.464379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 
00:27:09.019 [2024-07-12 17:35:27.464473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.464482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.464563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.464572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.464728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.464737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.464880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.464890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.464974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.464984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 
00:27:09.019 [2024-07-12 17:35:27.465058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.465068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.465172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.465182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.019 [2024-07-12 17:35:27.465263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.019 [2024-07-12 17:35:27.465273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.019 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.465347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.465358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.465434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.465444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 
00:27:09.020 [2024-07-12 17:35:27.465613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.465624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.465715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.465724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.465889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.465900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.466005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.466015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.466080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.466090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 
00:27:09.020 [2024-07-12 17:35:27.466180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.466190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.466290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.466299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.466384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.466395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.466470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.466479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.466559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.466569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 
00:27:09.020 [2024-07-12 17:35:27.466652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.466663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.466759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.466769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.466850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.466859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.466943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.466953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.467039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.467049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 
00:27:09.020 [2024-07-12 17:35:27.467125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.467134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.467208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.467219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.467296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.467306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.467403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.467413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.467505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.467515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 
00:27:09.020 [2024-07-12 17:35:27.467611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.467621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.467713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.467723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.467799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.467809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.467877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.467887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.467969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.467981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 
00:27:09.020 [2024-07-12 17:35:27.468060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.468069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.468221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.468231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.468323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.468334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.468427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.468437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.468510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.468520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 
00:27:09.020 [2024-07-12 17:35:27.468607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.468617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.468697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.020 [2024-07-12 17:35:27.468707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.020 qpair failed and we were unable to recover it. 00:27:09.020 [2024-07-12 17:35:27.468783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.468792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.468934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.468944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.469022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.469033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 
00:27:09.021 [2024-07-12 17:35:27.469129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.469139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.469327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.469337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.469429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.469440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.469522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.469532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.469681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.469691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 
00:27:09.021 [2024-07-12 17:35:27.469777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.469787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.469929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.469939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.470021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.470031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.470115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.470125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.470267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.470277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 
00:27:09.021 [2024-07-12 17:35:27.470351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.470360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.470454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.470465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.470619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.470628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.470719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.470728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.470805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.470815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 
00:27:09.021 [2024-07-12 17:35:27.470894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.470903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.471013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.471023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.471102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.471113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.471335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.471346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 00:27:09.021 [2024-07-12 17:35:27.471420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.471430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it. 
00:27:09.021 [2024-07-12 17:35:27.471508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.021 [2024-07-12 17:35:27.471518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.021 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed" error triple repeated ~113 more times for tqpair=0x7f4a7c000b90 (addr=10.0.0.2, port=4420, errno = 111) between 17:35:27.471598 and 17:35:27.484710 ...]
00:27:09.024 [2024-07-12 17:35:27.484790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.484800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it.
00:27:09.024 [2024-07-12 17:35:27.484940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.484949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.485042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.485055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.485204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.485215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.485385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.485395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.485493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.485505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 
00:27:09.024 [2024-07-12 17:35:27.485580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.485591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.485820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.485832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.485922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.485932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.486080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.486091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.486163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.486173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 
00:27:09.024 [2024-07-12 17:35:27.486261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.486273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.486352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.486362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.486471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.486482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.486569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.486580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.486677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.024 [2024-07-12 17:35:27.486709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.024 [2024-07-12 17:35:27.486717] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.024 [2024-07-12 17:35:27.486723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:09.024 [2024-07-12 17:35:27.486728] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.024 [2024-07-12 17:35:27.486727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.486738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.486881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.024 [2024-07-12 17:35:27.486891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.024 qpair failed and we were unable to recover it. 00:27:09.024 [2024-07-12 17:35:27.486839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:09.024 [2024-07-12 17:35:27.486970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.486979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.486946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:09.025 [2024-07-12 17:35:27.487053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:09.025 [2024-07-12 17:35:27.487070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.487080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 
00:27:09.025 [2024-07-12 17:35:27.487053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:09.025 [2024-07-12 17:35:27.487229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.487239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.487409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.487420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.487560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.487570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.487716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.487725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.487879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.487889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 
00:27:09.025 [2024-07-12 17:35:27.488106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.488116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.488190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.488199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.488294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.488303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.488405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.488416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.488495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.488505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 
00:27:09.025 [2024-07-12 17:35:27.488598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.488609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.488701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.488711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.488804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.488814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.488897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.488907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.488985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.488995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 
00:27:09.025 [2024-07-12 17:35:27.489089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.489099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.489183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.489193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.489268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.489277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.489364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.489380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.489474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.489484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 
00:27:09.025 [2024-07-12 17:35:27.489647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.489658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.489733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.489743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.489921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.489932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.490016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.490025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.490112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.490122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 
00:27:09.025 [2024-07-12 17:35:27.490263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.490273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.490417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.490428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.490507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.490517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.490592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.490602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.490680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.490691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 
00:27:09.025 [2024-07-12 17:35:27.490768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.490779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.490852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.490862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.491003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.491014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.491101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.491113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.491198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.491208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 
00:27:09.025 [2024-07-12 17:35:27.491347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.491359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.491464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.491475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.025 [2024-07-12 17:35:27.491618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.025 [2024-07-12 17:35:27.491628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.025 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.491699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.491709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.491852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.491863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 
00:27:09.026 [2024-07-12 17:35:27.491938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.491948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.492034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.492045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.492137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.492146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.492221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.492232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.492315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.492325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 
00:27:09.026 [2024-07-12 17:35:27.492416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.492426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.492501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.492510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.492584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.492595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.492806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.492817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.492892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.492902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 
00:27:09.026 [2024-07-12 17:35:27.493002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.493012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.493158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.493168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.493260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.493270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.493406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.493416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 00:27:09.026 [2024-07-12 17:35:27.493557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.493567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it. 
00:27:09.026 [2024-07-12 17:35:27.493651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.026 [2024-07-12 17:35:27.493661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.026 qpair failed and we were unable to recover it.
[... same connect() failed, errno = 111 / qpair failure pair repeated through 17:35:27.507, all for tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 ...]
00:27:09.029 [2024-07-12 17:35:27.507110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.507120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.507270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.507281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.507495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.507507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.507574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.507583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.507666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.507677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 
00:27:09.029 [2024-07-12 17:35:27.507827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.507838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.507979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.507990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.508168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.508178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.508272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.508282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.508434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.508449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 
00:27:09.029 [2024-07-12 17:35:27.508546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.508558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.508631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.508642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.508744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.508755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.508834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.508845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.508937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.508948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 
00:27:09.029 [2024-07-12 17:35:27.509088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.509099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.509183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.509193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.509284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.509295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.509438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.509448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 00:27:09.029 [2024-07-12 17:35:27.509526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.029 [2024-07-12 17:35:27.509536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.029 qpair failed and we were unable to recover it. 
00:27:09.030 [2024-07-12 17:35:27.509629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.509640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.509819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.509830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.509911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.509921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.510005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.510016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.510159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.510169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 
00:27:09.030 [2024-07-12 17:35:27.510254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.510264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.510340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.510349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.510420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.510430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.510510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.510520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.510595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.510605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 
00:27:09.030 [2024-07-12 17:35:27.510715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.510726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.510915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.510927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.511106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.511116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.511258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.511270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.511411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.511423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 
00:27:09.030 [2024-07-12 17:35:27.511501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.511511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.511629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.511639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.511808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.511818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.511961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.511971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.512114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.512124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 
00:27:09.030 [2024-07-12 17:35:27.512274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.512284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.512383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.512394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.512483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.512495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.512596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.512606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.512697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.512707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 
00:27:09.030 [2024-07-12 17:35:27.512796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.512806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.513014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.513025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.513121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.513131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.513357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.513368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.513469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.513479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 
00:27:09.030 [2024-07-12 17:35:27.513584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.513594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.513675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.513684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.513800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.513811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.513924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.513934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.514086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.514097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 
00:27:09.030 [2024-07-12 17:35:27.514173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.514183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.514281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.514292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.514461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.514472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.514616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.514628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 00:27:09.030 [2024-07-12 17:35:27.514730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.030 [2024-07-12 17:35:27.514741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.030 qpair failed and we were unable to recover it. 
00:27:09.030 [2024-07-12 17:35:27.514846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.514858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 00:27:09.031 [2024-07-12 17:35:27.514959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.514970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 00:27:09.031 [2024-07-12 17:35:27.515077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.515088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 00:27:09.031 [2024-07-12 17:35:27.515237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.515249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 00:27:09.031 [2024-07-12 17:35:27.515397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.515408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 
00:27:09.031 [2024-07-12 17:35:27.515499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.515510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 00:27:09.031 [2024-07-12 17:35:27.515616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.515627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 00:27:09.031 [2024-07-12 17:35:27.515714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.515724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 00:27:09.031 [2024-07-12 17:35:27.515810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.515821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 00:27:09.031 [2024-07-12 17:35:27.515975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.515985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 
00:27:09.031 [2024-07-12 17:35:27.516074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.516084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 00:27:09.031 [2024-07-12 17:35:27.516235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.516245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 00:27:09.031 [2024-07-12 17:35:27.516404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.516415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 00:27:09.031 [2024-07-12 17:35:27.516501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.516511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 00:27:09.031 [2024-07-12 17:35:27.516650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.031 [2024-07-12 17:35:27.516661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.031 qpair failed and we were unable to recover it. 
00:27:09.031 [2024-07-12 17:35:27.516840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.031 [2024-07-12 17:35:27.516851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.031 qpair failed and we were unable to recover it.
00:27:09.034 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 17:35:27.516927 through 17:35:27.531683 ...]
00:27:09.034 [2024-07-12 17:35:27.531829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.531841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.531913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.531924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.532001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.532011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.532097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.532107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.532197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.532208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 
00:27:09.034 [2024-07-12 17:35:27.532418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.532428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.532501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.532511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.532621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.532631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.532709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.532720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.532794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.532804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 
00:27:09.034 [2024-07-12 17:35:27.532897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.532907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.532997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.533007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.533093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.533103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.533183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.533194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.533265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.533275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 
00:27:09.034 [2024-07-12 17:35:27.533413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.533423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.533515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.533524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.533667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.533677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.533825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.533834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.533914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.533924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 
00:27:09.034 [2024-07-12 17:35:27.534006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.534019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.534188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.534198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.534272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.534282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.534368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.534381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.534473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.534483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 
00:27:09.034 [2024-07-12 17:35:27.534575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.534585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.034 [2024-07-12 17:35:27.534679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.034 [2024-07-12 17:35:27.534689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.034 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.534830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.534840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.534922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.534932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.535008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.535018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 
00:27:09.035 [2024-07-12 17:35:27.535093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.535103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.535185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.535195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.535269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.535279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.535351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.535361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.535447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.535457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 
00:27:09.035 [2024-07-12 17:35:27.535531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.535541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.535613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.535622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.535762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.535772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.535851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.535860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.535933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.535943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 
00:27:09.035 [2024-07-12 17:35:27.536024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.536033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.536112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.536123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.536202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.536212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.536296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.536306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.536393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.536404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 
00:27:09.035 [2024-07-12 17:35:27.536479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.536489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.536588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.536598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.536713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.536723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.536808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.536818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.536883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.536893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 
00:27:09.035 [2024-07-12 17:35:27.536975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.536985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.537084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.537094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.537170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.537180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.537265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.537275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.537349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.537359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 
00:27:09.035 [2024-07-12 17:35:27.537571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.537582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.537744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.537755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.537838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.537849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.537926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.537936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.538013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.538023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 
00:27:09.035 [2024-07-12 17:35:27.538115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.538128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.538208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.538218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.538296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.538306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.538401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.538412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.538558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.538568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 
00:27:09.035 [2024-07-12 17:35:27.538652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.538662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.538733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.538742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.035 [2024-07-12 17:35:27.538898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.035 [2024-07-12 17:35:27.538908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.035 qpair failed and we were unable to recover it. 00:27:09.036 [2024-07-12 17:35:27.539007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.036 [2024-07-12 17:35:27.539018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.036 qpair failed and we were unable to recover it. 00:27:09.036 [2024-07-12 17:35:27.539107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.036 [2024-07-12 17:35:27.539118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.036 qpair failed and we were unable to recover it. 
00:27:09.036 [2024-07-12 17:35:27.539259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.036 [2024-07-12 17:35:27.539269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.036 qpair failed and we were unable to recover it. 00:27:09.036 [2024-07-12 17:35:27.539359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.036 [2024-07-12 17:35:27.539369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.036 qpair failed and we were unable to recover it. 00:27:09.036 [2024-07-12 17:35:27.539462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.036 [2024-07-12 17:35:27.539472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.036 qpair failed and we were unable to recover it. 00:27:09.036 [2024-07-12 17:35:27.539561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.036 [2024-07-12 17:35:27.539571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.036 qpair failed and we were unable to recover it. 00:27:09.036 [2024-07-12 17:35:27.539691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.036 [2024-07-12 17:35:27.539701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.036 qpair failed and we were unable to recover it. 
00:27:09.036 [2024-07-12 17:35:27.539789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.036 [2024-07-12 17:35:27.539799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.036 qpair failed and we were unable to recover it. 
00:27:09.039 [previous error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated from 17:35:27.539943 through 17:35:27.553429]
00:27:09.039 [2024-07-12 17:35:27.553512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.553522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.553596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.553605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.553681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.553691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.553792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.553802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.553947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.553957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 
00:27:09.039 [2024-07-12 17:35:27.554046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.554055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.554152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.554162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.554236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.554246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.554386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.554396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.554468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.554478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 
00:27:09.039 [2024-07-12 17:35:27.554558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.554569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.554651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.554661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.554872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.554882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.554959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.554969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.555205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.555215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 
00:27:09.039 [2024-07-12 17:35:27.555403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.555413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.555479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.555489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.555576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.555586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.555673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.555683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.555768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.555778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 
00:27:09.039 [2024-07-12 17:35:27.555877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.555887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.556047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.556057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.556199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.556209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.556365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.556376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.556543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.556553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 
00:27:09.039 [2024-07-12 17:35:27.556637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.556647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.556752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.556762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.556852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.556862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.557013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.557023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.557194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.557204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 
00:27:09.039 [2024-07-12 17:35:27.557356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.557367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.557461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.557471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.557545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.557555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.557645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.557655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.039 qpair failed and we were unable to recover it. 00:27:09.039 [2024-07-12 17:35:27.557743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.039 [2024-07-12 17:35:27.557753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 
00:27:09.040 [2024-07-12 17:35:27.557841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.557851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.557944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.557954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.558055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.558065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.558132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.558142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.558285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.558295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 
00:27:09.040 [2024-07-12 17:35:27.558368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.558383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.558446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.558457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.558567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.558577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.558652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.558661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.558805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.558815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 
00:27:09.040 [2024-07-12 17:35:27.558886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.558897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.558989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.558998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.559070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.559080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.559160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.559171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.559249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.559259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 
00:27:09.040 [2024-07-12 17:35:27.559342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.559352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.559450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.559461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.559564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.559574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.559657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.559667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.559773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.559784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 
00:27:09.040 [2024-07-12 17:35:27.559862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.559872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.559946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.559957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.560085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.560124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.560220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.560234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.560327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.560340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 
00:27:09.040 [2024-07-12 17:35:27.560446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.560460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.560530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.560543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.560721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.560735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.560821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.560835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.560928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.560941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 
00:27:09.040 [2024-07-12 17:35:27.561036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.561049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.561132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.561146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.561239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.561252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.561352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.561366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.561558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.561572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 
00:27:09.040 [2024-07-12 17:35:27.561721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.561739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.561826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.561839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.561933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.561946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.562017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.562030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 00:27:09.040 [2024-07-12 17:35:27.562190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.040 [2024-07-12 17:35:27.562204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.040 qpair failed and we were unable to recover it. 
00:27:09.040 [2024-07-12 17:35:27.562288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.041 [2024-07-12 17:35:27.562301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.041 qpair failed and we were unable to recover it. 00:27:09.041 [2024-07-12 17:35:27.562390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.041 [2024-07-12 17:35:27.562403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.041 qpair failed and we were unable to recover it. 00:27:09.041 [2024-07-12 17:35:27.562557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.041 [2024-07-12 17:35:27.562572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.041 qpair failed and we were unable to recover it. 00:27:09.041 [2024-07-12 17:35:27.562672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.041 [2024-07-12 17:35:27.562685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.041 qpair failed and we were unable to recover it. 00:27:09.041 [2024-07-12 17:35:27.562774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.041 [2024-07-12 17:35:27.562787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.041 qpair failed and we were unable to recover it. 
00:27:09.041 [2024-07-12 17:35:27.562949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.041 [2024-07-12 17:35:27.562962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.041 qpair failed and we were unable to recover it. 00:27:09.041 [2024-07-12 17:35:27.563110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.041 [2024-07-12 17:35:27.563124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.041 qpair failed and we were unable to recover it. 00:27:09.041 [2024-07-12 17:35:27.563290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.041 [2024-07-12 17:35:27.563303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.041 qpair failed and we were unable to recover it. 00:27:09.041 [2024-07-12 17:35:27.563395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.041 [2024-07-12 17:35:27.563410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.041 qpair failed and we were unable to recover it. 00:27:09.041 [2024-07-12 17:35:27.563509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.041 [2024-07-12 17:35:27.563523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.041 qpair failed and we were unable to recover it. 
00:27:09.041 [2024-07-12 17:35:27.563628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.563642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.563738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.563751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.563839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.563853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.563939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.563953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.564055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.564069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.564222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.564235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.564322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.564336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.564450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.564463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.564563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.564577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.564663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.564677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.564863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.564878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.564958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.564971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.565067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.565079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.565172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.565182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.565277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.565287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.565375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.565390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.565479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.565489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.565567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.565577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.565660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.565670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.565746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.565756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.565830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.565840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.565975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.565985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.566064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.566073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.566171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.566181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.566278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.041 [2024-07-12 17:35:27.566288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.041 qpair failed and we were unable to recover it.
00:27:09.041 [2024-07-12 17:35:27.566437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.566449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.566540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.566549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.566693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.566702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.566787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.566797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.566894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.566903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.566986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.566996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.567075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.567085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.567166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.567176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.567248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.567258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.567442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.567452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.567607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.567617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.567698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.567707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.567851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.567861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.568046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.568056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.568151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.568161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.568238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.568247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.568319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.568329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.568422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.568433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.568512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.568522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.568685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.568694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.568774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.568783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.568936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.568945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.569021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.569031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.569169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.569179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.569329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.569339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.569429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.569439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.569527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.569538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.569620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.569630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.569703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.569713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.569807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.569817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.569893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.569903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.570046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.570056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.570133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.570142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.570234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.570244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.570323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.570333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.570487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.570497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.570585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.570595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.570657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.570667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.570747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.570757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.570834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.570844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.570921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.042 [2024-07-12 17:35:27.570933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.042 qpair failed and we were unable to recover it.
00:27:09.042 [2024-07-12 17:35:27.571010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.571020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.571117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.571127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.571190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.571200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.571295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.571304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.571387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.571397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.571477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.571487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.571636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.571645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.571856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.571865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.572017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.572027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.572119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.572129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.572204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.572214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.572306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.572316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.572399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.572409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.572498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.572508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.572612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.572621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.572710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.572720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.572809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.572818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.572900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.572910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.573000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.573010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.573167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.573177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.573268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.573278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.573442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.573452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.573543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.573554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.573630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.573640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.573718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.573728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.573795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.573805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.573887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.573903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.573999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.574012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.574178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.043 [2024-07-12 17:35:27.574192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.043 qpair failed and we were unable to recover it.
00:27:09.043 [2024-07-12 17:35:27.574286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.043 [2024-07-12 17:35:27.574299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.043 qpair failed and we were unable to recover it. 00:27:09.043 [2024-07-12 17:35:27.574386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.043 [2024-07-12 17:35:27.574400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.043 qpair failed and we were unable to recover it. 00:27:09.043 [2024-07-12 17:35:27.574496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.043 [2024-07-12 17:35:27.574509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.043 qpair failed and we were unable to recover it. 00:27:09.043 [2024-07-12 17:35:27.574614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.043 [2024-07-12 17:35:27.574628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.043 qpair failed and we were unable to recover it. 00:27:09.043 [2024-07-12 17:35:27.574736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.043 [2024-07-12 17:35:27.574750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.043 qpair failed and we were unable to recover it. 
00:27:09.043 [2024-07-12 17:35:27.574842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.043 [2024-07-12 17:35:27.574855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.043 qpair failed and we were unable to recover it. 00:27:09.043 [2024-07-12 17:35:27.574935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.043 [2024-07-12 17:35:27.574948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.043 qpair failed and we were unable to recover it. 00:27:09.043 [2024-07-12 17:35:27.575055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.043 [2024-07-12 17:35:27.575068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.043 qpair failed and we were unable to recover it. 00:27:09.043 [2024-07-12 17:35:27.575161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.043 [2024-07-12 17:35:27.575174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.043 qpair failed and we were unable to recover it. 00:27:09.043 [2024-07-12 17:35:27.575249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.043 [2024-07-12 17:35:27.575263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.043 qpair failed and we were unable to recover it. 
00:27:09.043 [2024-07-12 17:35:27.575349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.043 [2024-07-12 17:35:27.575365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.043 qpair failed and we were unable to recover it. 00:27:09.043 [2024-07-12 17:35:27.575468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.043 [2024-07-12 17:35:27.575482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.043 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.575565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.575578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.575662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.575674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.575758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.575772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 
00:27:09.044 [2024-07-12 17:35:27.575865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.575878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.575958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.575972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.576056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.576069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.576164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.576177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.576281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.576295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 
00:27:09.044 [2024-07-12 17:35:27.576395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.576409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.576510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.576524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.576633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.576646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.576733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.576748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.576834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.576848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 
00:27:09.044 [2024-07-12 17:35:27.576950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.576963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.577057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.577071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.577171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.577184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.577271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.577285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.577437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.577451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 
00:27:09.044 [2024-07-12 17:35:27.577543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.577557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.577640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.577653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.577730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.577743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.577830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.577843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.577996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.578009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 
00:27:09.044 [2024-07-12 17:35:27.578091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.578104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.578199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.578213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.578294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.578307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.578387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.578398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.578475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.578485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 
00:27:09.044 [2024-07-12 17:35:27.578561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.578570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.578646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.578655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.578742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.578752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.578891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.578901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.578976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.578986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 
00:27:09.044 [2024-07-12 17:35:27.579063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.579073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.579151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.579160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.579251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.579261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.579363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.579372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.579458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.579468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 
00:27:09.044 [2024-07-12 17:35:27.579546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.579559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.579646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.579655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.044 [2024-07-12 17:35:27.579814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.044 [2024-07-12 17:35:27.579824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.044 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.579898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.579908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.580110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.580121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 
00:27:09.045 [2024-07-12 17:35:27.580206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.580216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.580295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.580305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.580382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.580392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.580589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.580599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.580745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.580755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 
00:27:09.045 [2024-07-12 17:35:27.580844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.580855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.580951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.580961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.581039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.581049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.581119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.581129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.581227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.581237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 
00:27:09.045 [2024-07-12 17:35:27.581335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.581345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.581436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.581447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.581539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.581549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.581633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.581643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.581714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.581724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 
00:27:09.045 [2024-07-12 17:35:27.581865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.581875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.581947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.581957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.582033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.582042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.582120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.582130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.582227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.582237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 
00:27:09.045 [2024-07-12 17:35:27.582309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.582319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.582398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.582408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.582483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.582494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.582573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.582583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.582671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.582681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 
00:27:09.045 [2024-07-12 17:35:27.582759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.582770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.582851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.582861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.583017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.583026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.583112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.583122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 00:27:09.045 [2024-07-12 17:35:27.583207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.045 [2024-07-12 17:35:27.583217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.045 qpair failed and we were unable to recover it. 
00:27:09.045 [2024-07-12 17:35:27.583361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.045 [2024-07-12 17:35:27.583371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.045 qpair failed and we were unable to recover it.
00:27:09.048 [the same three-line failure sequence repeats 115 times, from 17:35:27.583361 through 17:35:27.597021; only the timestamps change, while the tqpair (0x7f4a7c000b90), target address (10.0.0.2), and port (4420) stay constant]
00:27:09.048 [2024-07-12 17:35:27.597126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.048 [2024-07-12 17:35:27.597135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.048 qpair failed and we were unable to recover it. 00:27:09.048 [2024-07-12 17:35:27.597216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.048 [2024-07-12 17:35:27.597226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.048 qpair failed and we were unable to recover it. 00:27:09.048 [2024-07-12 17:35:27.597368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.048 [2024-07-12 17:35:27.597397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.048 qpair failed and we were unable to recover it. 00:27:09.048 [2024-07-12 17:35:27.597481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.048 [2024-07-12 17:35:27.597491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.048 qpair failed and we were unable to recover it. 00:27:09.048 [2024-07-12 17:35:27.597697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.048 [2024-07-12 17:35:27.597706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.048 qpair failed and we were unable to recover it. 
00:27:09.048 [2024-07-12 17:35:27.597782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.048 [2024-07-12 17:35:27.597791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.048 qpair failed and we were unable to recover it. 00:27:09.048 [2024-07-12 17:35:27.597878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.048 [2024-07-12 17:35:27.597887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.048 qpair failed and we were unable to recover it. 00:27:09.048 [2024-07-12 17:35:27.598027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.048 [2024-07-12 17:35:27.598036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.048 qpair failed and we were unable to recover it. 00:27:09.048 [2024-07-12 17:35:27.598112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.048 [2024-07-12 17:35:27.598121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.048 qpair failed and we were unable to recover it. 00:27:09.048 [2024-07-12 17:35:27.598202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.598212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 
00:27:09.049 [2024-07-12 17:35:27.598350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.598360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.598510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.598521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.598605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.598615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.598691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.598701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.598788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.598798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 
00:27:09.049 [2024-07-12 17:35:27.598876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.598886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.598977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.598987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.599060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.599070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.599140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.599150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.599229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.599239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 
00:27:09.049 [2024-07-12 17:35:27.599329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.599339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.599416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.599427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.599515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.599525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.599601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.599611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.599698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.599708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 
00:27:09.049 [2024-07-12 17:35:27.599782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.599791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.599887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.599897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.600043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.600053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.600140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.600150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.600235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.600245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 
00:27:09.049 [2024-07-12 17:35:27.600317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.600327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.600468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.600478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.600563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.600572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.600645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.600656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.600800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.600810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 
00:27:09.049 [2024-07-12 17:35:27.600953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.600963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.601047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.601057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.601194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.601205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.601280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.601289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.601371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.601384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 
00:27:09.049 [2024-07-12 17:35:27.601459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.601469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.601608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.601617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.601772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.601781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.601930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.601940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.602105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.602115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 
00:27:09.049 [2024-07-12 17:35:27.602264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.602274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.602349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.602359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.602517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.602527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.602606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.602616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 00:27:09.049 [2024-07-12 17:35:27.602757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.049 [2024-07-12 17:35:27.602766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.049 qpair failed and we were unable to recover it. 
00:27:09.050 [2024-07-12 17:35:27.602923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.602932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.603025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.603035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.603130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.603140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.603249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.603259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.603355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.603364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 
00:27:09.050 [2024-07-12 17:35:27.603455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.603465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.603571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.603581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.603656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.603666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.603760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.603770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.603844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.603854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 
00:27:09.050 [2024-07-12 17:35:27.603932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.603941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.604101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.604112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.604204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.604214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.604303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.604313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.604520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.604531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 
00:27:09.050 [2024-07-12 17:35:27.604616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.604625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.604782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.604792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.604876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.604886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.604983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.604992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.605073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.605083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 
00:27:09.050 [2024-07-12 17:35:27.605182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.605192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.605256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.605266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.605421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.605431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.605571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.605581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.605660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.605671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 
00:27:09.050 [2024-07-12 17:35:27.605843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.605853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.606025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.606035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.606113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.606125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.606257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.606268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 00:27:09.050 [2024-07-12 17:35:27.606349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.050 [2024-07-12 17:35:27.606359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.050 qpair failed and we were unable to recover it. 
[... the same three-line sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats verbatim through [2024-07-12 17:35:27.619343] ...]
00:27:09.053 [2024-07-12 17:35:27.619506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.619517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.053 [2024-07-12 17:35:27.619608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.619618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.053 [2024-07-12 17:35:27.619693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.619703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.053 [2024-07-12 17:35:27.619784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.619794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.053 [2024-07-12 17:35:27.619870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.619880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 
00:27:09.053 [2024-07-12 17:35:27.619954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.619963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.053 [2024-07-12 17:35:27.620050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.620060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.053 [2024-07-12 17:35:27.620209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.620221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.053 [2024-07-12 17:35:27.620396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.620406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.053 [2024-07-12 17:35:27.620485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.620496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 
00:27:09.053 [2024-07-12 17:35:27.620574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.620584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.053 [2024-07-12 17:35:27.620730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.620740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.053 [2024-07-12 17:35:27.620901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.620911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.053 [2024-07-12 17:35:27.620985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.620995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.053 [2024-07-12 17:35:27.621095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.621105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 
00:27:09.053 [2024-07-12 17:35:27.621188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.621198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.053 [2024-07-12 17:35:27.621275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.053 [2024-07-12 17:35:27.621284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.053 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.621425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.621435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.621519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.621529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.621670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.621679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 
00:27:09.054 [2024-07-12 17:35:27.621750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.621760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.621851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.621861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.622003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.622013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.622120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.622131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.622228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.622239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 
00:27:09.054 [2024-07-12 17:35:27.622316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.622326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.622418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.622428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.622520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.622530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.622605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.622615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.622690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.622699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 
00:27:09.054 [2024-07-12 17:35:27.622773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.622782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.622857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.622868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.622959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.622969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.623109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.623119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.623211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.623235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 
00:27:09.054 [2024-07-12 17:35:27.623393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.623407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.623508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.623521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.623624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.623637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.623720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.623733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.623814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.623827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 
00:27:09.054 [2024-07-12 17:35:27.623921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.623934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.624025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.624038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.624127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.624141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.624226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.624240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.624335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.624347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 
00:27:09.054 [2024-07-12 17:35:27.624441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.624456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.624549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.624561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.624648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.624665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.624775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.624789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.624887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.624900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 
00:27:09.054 [2024-07-12 17:35:27.625044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.054 [2024-07-12 17:35:27.625058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.054 qpair failed and we were unable to recover it. 00:27:09.054 [2024-07-12 17:35:27.625154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.625167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.625263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.625277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.625368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.625386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.625488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.625503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 
00:27:09.055 [2024-07-12 17:35:27.625606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.625619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.625709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.625722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.625807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.625820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.625914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.625927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.626008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.626021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 
00:27:09.055 [2024-07-12 17:35:27.626114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.626127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.626290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.626305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.626460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.626473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.626563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.626577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.626733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.626746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 
00:27:09.055 [2024-07-12 17:35:27.626905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.626918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.627018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.627030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.627116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.627129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.627209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.627222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.627310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.627324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 
00:27:09.055 [2024-07-12 17:35:27.627416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.627429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.627598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.627612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.627706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.627719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.627824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.627838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 00:27:09.055 [2024-07-12 17:35:27.627998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.628010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it. 
00:27:09.055 [2024-07-12 17:35:27.628091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.055 [2024-07-12 17:35:27.628100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.055 qpair failed and we were unable to recover it.
[The two-line error pair above repeats continuously from 17:35:27.628 through 17:35:27.641 (on the order of 100 occurrences, log timestamps 00:27:09.055-00:27:09.058), alternating between tqpair=0x7f4a7c000b90 and tqpair=0x7f4a74000b90. Every occurrence is identical apart from the microsecond timestamp: connect() to addr=10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED, i.e. the NVMe/TCP target is refusing connections), and each qpair fails without recovery.]
00:27:09.058 [2024-07-12 17:35:27.641687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.058 [2024-07-12 17:35:27.641697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.058 qpair failed and we were unable to recover it. 00:27:09.058 [2024-07-12 17:35:27.641781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.058 [2024-07-12 17:35:27.641791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.058 qpair failed and we were unable to recover it. 00:27:09.058 [2024-07-12 17:35:27.641876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.058 [2024-07-12 17:35:27.641886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.058 qpair failed and we were unable to recover it. 00:27:09.058 [2024-07-12 17:35:27.641961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.058 [2024-07-12 17:35:27.641970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.058 qpair failed and we were unable to recover it. 00:27:09.058 [2024-07-12 17:35:27.642120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.058 [2024-07-12 17:35:27.642131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.058 qpair failed and we were unable to recover it. 
00:27:09.058 [2024-07-12 17:35:27.642271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.058 [2024-07-12 17:35:27.642281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.058 qpair failed and we were unable to recover it. 00:27:09.058 [2024-07-12 17:35:27.642427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.058 [2024-07-12 17:35:27.642437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.058 qpair failed and we were unable to recover it. 00:27:09.058 [2024-07-12 17:35:27.642517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.058 [2024-07-12 17:35:27.642527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.058 qpair failed and we were unable to recover it. 00:27:09.058 [2024-07-12 17:35:27.642602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.058 [2024-07-12 17:35:27.642612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 00:27:09.059 [2024-07-12 17:35:27.642768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.059 [2024-07-12 17:35:27.642778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 
00:27:09.059 [2024-07-12 17:35:27.642937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.059 [2024-07-12 17:35:27.642947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 00:27:09.059 [2024-07-12 17:35:27.643098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.059 [2024-07-12 17:35:27.643108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 00:27:09.059 [2024-07-12 17:35:27.643260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.059 [2024-07-12 17:35:27.643270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 00:27:09.059 [2024-07-12 17:35:27.643348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.059 [2024-07-12 17:35:27.643358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 00:27:09.059 [2024-07-12 17:35:27.643460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.059 [2024-07-12 17:35:27.643471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 
00:27:09.059 [2024-07-12 17:35:27.643556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.059 [2024-07-12 17:35:27.643566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 00:27:09.059 [2024-07-12 17:35:27.643651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.059 [2024-07-12 17:35:27.643661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 00:27:09.059 [2024-07-12 17:35:27.643808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.059 [2024-07-12 17:35:27.643817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 00:27:09.059 [2024-07-12 17:35:27.643995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.059 [2024-07-12 17:35:27.644005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 00:27:09.059 [2024-07-12 17:35:27.644078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.059 [2024-07-12 17:35:27.644088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 
00:27:09.059 [2024-07-12 17:35:27.644183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.059 [2024-07-12 17:35:27.644193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 00:27:09.059 [2024-07-12 17:35:27.644270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.059 [2024-07-12 17:35:27.644280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.059 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.644507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.644517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.644603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.644613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.644686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.644696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 
00:27:09.060 [2024-07-12 17:35:27.644781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.644791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.644865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.644875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.644967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.644977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.645135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.645144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.645235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.645245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 
00:27:09.060 [2024-07-12 17:35:27.645321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.645331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.645414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.645424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.645564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.645574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.645651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.645660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.645740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.645749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 
00:27:09.060 [2024-07-12 17:35:27.645840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.645850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.645938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.645949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.646043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.646053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.646133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.646142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.646225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.646235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 
00:27:09.060 [2024-07-12 17:35:27.646316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.646325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.646414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.060 [2024-07-12 17:35:27.646424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.060 qpair failed and we were unable to recover it. 00:27:09.060 [2024-07-12 17:35:27.646508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.646518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.646601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.646611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.646694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.646705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 
00:27:09.061 [2024-07-12 17:35:27.646917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.646927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.647002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.647012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.647101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.647111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.647186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.647195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.647339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.647349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 
00:27:09.061 [2024-07-12 17:35:27.647409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.647419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.647526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.647536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.647651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.647662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.647764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.647774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.647858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.647868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 
00:27:09.061 [2024-07-12 17:35:27.647958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.647968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.648054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.648064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.648208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.648218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.648309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.648319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.648471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.648481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 
00:27:09.061 [2024-07-12 17:35:27.648566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.648576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.648673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.648682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.648776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.648785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.648870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.648880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.649039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.649049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 
00:27:09.061 [2024-07-12 17:35:27.649137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.649147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.649229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.649239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.649310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.649320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.649418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.649428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.649486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.649495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 
00:27:09.061 [2024-07-12 17:35:27.649724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.649734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.649831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.649841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.649997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.650007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.650090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.650100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 00:27:09.061 [2024-07-12 17:35:27.650171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.061 [2024-07-12 17:35:27.650180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.061 qpair failed and we were unable to recover it. 
00:27:09.061 [2024-07-12 17:35:27.650269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.061 [2024-07-12 17:35:27.650279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.061 qpair failed and we were unable to recover it.
00:27:09.064 [last message sequence repeated for every reconnect attempt on tqpair=0x7f4a7c000b90 through 2024-07-12 17:35:27.663614]
00:27:09.064 [2024-07-12 17:35:27.663710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.064 [2024-07-12 17:35:27.663719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.064 qpair failed and we were unable to recover it. 00:27:09.064 [2024-07-12 17:35:27.663812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.064 [2024-07-12 17:35:27.663823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.064 qpair failed and we were unable to recover it. 00:27:09.064 [2024-07-12 17:35:27.663971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.064 [2024-07-12 17:35:27.663981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.064 qpair failed and we were unable to recover it. 00:27:09.064 [2024-07-12 17:35:27.664066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.064 [2024-07-12 17:35:27.664076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.064 qpair failed and we were unable to recover it. 00:27:09.064 [2024-07-12 17:35:27.664150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.064 [2024-07-12 17:35:27.664159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.064 qpair failed and we were unable to recover it. 
00:27:09.064 [2024-07-12 17:35:27.664220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.064 [2024-07-12 17:35:27.664230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.064 qpair failed and we were unable to recover it. 00:27:09.064 [2024-07-12 17:35:27.664310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.064 [2024-07-12 17:35:27.664320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.064 qpair failed and we were unable to recover it. 00:27:09.064 [2024-07-12 17:35:27.664403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.064 [2024-07-12 17:35:27.664413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.064 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.664492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.664502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.664667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.664677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 
00:27:09.065 [2024-07-12 17:35:27.664827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.664837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.664982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.664991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.665160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.665170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.665254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.665264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.665345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.665356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 
00:27:09.065 [2024-07-12 17:35:27.665456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.665466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.665564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.665574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.665726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.665736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.665817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.665827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.665909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.665919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 
00:27:09.065 [2024-07-12 17:35:27.666012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.666023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.666121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.666130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.666206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.666217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.666306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.666315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.666372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.666387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 
00:27:09.065 [2024-07-12 17:35:27.666486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.666496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.666580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.666590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.666670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.666680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.666811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.666821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.666898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.666908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 
00:27:09.065 [2024-07-12 17:35:27.667010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.667020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.667098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.667108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.667180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.667189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.667266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.667277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.667362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.667372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 
00:27:09.065 [2024-07-12 17:35:27.667450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.667459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.667534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.667544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.667617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.667627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.667707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.667717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.667912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.667921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 
00:27:09.065 [2024-07-12 17:35:27.667998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.668008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.668084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.668094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.668182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.668192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.668375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.668390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.668547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.668556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 
00:27:09.065 [2024-07-12 17:35:27.668625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.668634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.668719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.668728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.668813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.668823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.065 qpair failed and we were unable to recover it. 00:27:09.065 [2024-07-12 17:35:27.669002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.065 [2024-07-12 17:35:27.669012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.669102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.669112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 
00:27:09.066 [2024-07-12 17:35:27.669251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.669261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.669334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.669343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.669434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.669444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.669550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.669561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.669632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.669644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 
00:27:09.066 [2024-07-12 17:35:27.669719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.669728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.669820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.669829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.669910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.669920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.670129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.670138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.670283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.670293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 
00:27:09.066 [2024-07-12 17:35:27.670373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.670388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.670462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.670472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.670587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.670597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.670748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.670757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.670884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.670894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 
00:27:09.066 [2024-07-12 17:35:27.670968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.670978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.671149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.671159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.671327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.671337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.671479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.671490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.671585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.671595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 
00:27:09.066 [2024-07-12 17:35:27.671684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.671694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.671789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.671799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.671884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.671894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.672037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.672048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 00:27:09.066 [2024-07-12 17:35:27.672193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.066 [2024-07-12 17:35:27.672203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.066 qpair failed and we were unable to recover it. 
00:27:09.066 [2024-07-12 17:35:27.672294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.066 [2024-07-12 17:35:27.672304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.066 qpair failed and we were unable to recover it.
00:27:09.068 [2024-07-12 17:35:27.681614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.068 [2024-07-12 17:35:27.681623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.068 qpair failed and we were unable to recover it. 00:27:09.068 [2024-07-12 17:35:27.681708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.068 [2024-07-12 17:35:27.681740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.068 qpair failed and we were unable to recover it. 00:27:09.068 [2024-07-12 17:35:27.681837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.068 [2024-07-12 17:35:27.681852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.068 qpair failed and we were unable to recover it. 00:27:09.068 [2024-07-12 17:35:27.681951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.068 [2024-07-12 17:35:27.681965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.068 qpair failed and we were unable to recover it. 00:27:09.068 [2024-07-12 17:35:27.682053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.068 [2024-07-12 17:35:27.682067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.068 qpair failed and we were unable to recover it. 
00:27:09.068 [2024-07-12 17:35:27.682236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.068 [2024-07-12 17:35:27.682250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.068 qpair failed and we were unable to recover it. 00:27:09.068 [2024-07-12 17:35:27.682333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.068 [2024-07-12 17:35:27.682347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.068 qpair failed and we were unable to recover it. 00:27:09.068 [2024-07-12 17:35:27.682433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.068 [2024-07-12 17:35:27.682448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.068 qpair failed and we were unable to recover it. 00:27:09.068 [2024-07-12 17:35:27.682529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.068 [2024-07-12 17:35:27.682542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.068 qpair failed and we were unable to recover it. 00:27:09.068 [2024-07-12 17:35:27.682715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.068 [2024-07-12 17:35:27.682728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.069 qpair failed and we were unable to recover it. 
00:27:09.069 [2024-07-12 17:35:27.682834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.069 [2024-07-12 17:35:27.682848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.069 qpair failed and we were unable to recover it. 00:27:09.069 [2024-07-12 17:35:27.682933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.069 [2024-07-12 17:35:27.682947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.069 qpair failed and we were unable to recover it. 00:27:09.069 [2024-07-12 17:35:27.683031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.069 [2024-07-12 17:35:27.683045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.069 qpair failed and we were unable to recover it. 00:27:09.069 [2024-07-12 17:35:27.683157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.069 [2024-07-12 17:35:27.683170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.069 qpair failed and we were unable to recover it. 00:27:09.069 [2024-07-12 17:35:27.683339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.069 [2024-07-12 17:35:27.683353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.069 qpair failed and we were unable to recover it. 
00:27:09.070 [2024-07-12 17:35:27.690205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.070 [2024-07-12 17:35:27.690216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.070 qpair failed and we were unable to recover it.
00:27:09.071 [2024-07-12 17:35:27.697312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.071 [2024-07-12 17:35:27.697322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.071 qpair failed and we were unable to recover it. 00:27:09.071 [2024-07-12 17:35:27.697404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.071 [2024-07-12 17:35:27.697415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.071 qpair failed and we were unable to recover it. 00:27:09.071 [2024-07-12 17:35:27.697564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.071 [2024-07-12 17:35:27.697573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.697680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.697690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.697831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.697841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 
00:27:09.072 [2024-07-12 17:35:27.697929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.697939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.698016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.698025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.698201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.698211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.698286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.698296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.698388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.698400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 
00:27:09.072 [2024-07-12 17:35:27.698489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.698499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.698591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.698601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.698747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.698757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.698842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.698851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.698953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.698963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 
00:27:09.072 [2024-07-12 17:35:27.699042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.699051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.699128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.699138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.699211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.699222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.699307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.699316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.699408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.699418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 
00:27:09.072 [2024-07-12 17:35:27.699528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.699538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.699627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.699636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.699784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.699795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.699883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.699892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.700039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.700049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 
00:27:09.072 [2024-07-12 17:35:27.700141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.700150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.700236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.700246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.700317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.700327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.700478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.700488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.700558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.700568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 
00:27:09.072 [2024-07-12 17:35:27.700715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.700725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.700796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.700806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.700922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.700931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.701073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.701083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.701229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.701239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 
00:27:09.072 [2024-07-12 17:35:27.701326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.701336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.701474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.701484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.701636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.701648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.701733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.701743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.701833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.701843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 
00:27:09.072 [2024-07-12 17:35:27.701989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.701999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.702148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.702158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.702302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.072 [2024-07-12 17:35:27.702312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.072 qpair failed and we were unable to recover it. 00:27:09.072 [2024-07-12 17:35:27.702411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.702421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.702578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.702588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 
00:27:09.073 [2024-07-12 17:35:27.702810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.702820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.702894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.702903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.703047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.703057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.703134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.703143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.703284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.703294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 
00:27:09.073 [2024-07-12 17:35:27.703386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.703396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.703632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.703642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.703779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.703788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.703879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.703888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.704040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.704050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 
00:27:09.073 [2024-07-12 17:35:27.704126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.704136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.704276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.704286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.704461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.704471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.704622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.704631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.704721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.704731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 
00:27:09.073 [2024-07-12 17:35:27.704887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.704897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.704967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.704977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.705049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.705059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.705151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.705160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.705254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.705263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 
00:27:09.073 [2024-07-12 17:35:27.705404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.705415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.705589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.705598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.705734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.705744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.705951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.705961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.706101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.706111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 
00:27:09.073 [2024-07-12 17:35:27.706325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.706335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.706506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.706516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.706676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.706686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.706865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.706875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.706959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.706969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 
00:27:09.073 [2024-07-12 17:35:27.707237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.707246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.707407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.707417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.073 qpair failed and we were unable to recover it. 00:27:09.073 [2024-07-12 17:35:27.707483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.073 [2024-07-12 17:35:27.707494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.707636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.707646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.707733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.707742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 
00:27:09.074 [2024-07-12 17:35:27.707814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.707824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.707915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.707924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.708024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.708035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.708203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.708213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.708350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.708359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 
00:27:09.074 [2024-07-12 17:35:27.708458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.708469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.708559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.708569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.708752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.708762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.708862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.708872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.708951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.708961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 
00:27:09.074 [2024-07-12 17:35:27.709192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.709202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.709278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.709288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.709372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.709387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.709486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.709496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.709642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.709652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 
00:27:09.074 [2024-07-12 17:35:27.709796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.709806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.710037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.710046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.710205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.710215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.710383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.710394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.710470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.710480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 
00:27:09.074 [2024-07-12 17:35:27.710739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.710749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.710926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.710936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.711008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.711018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.711154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.711164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.711325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.711335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 
00:27:09.074 [2024-07-12 17:35:27.711567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.711577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.711741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.711751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.711812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.711821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.711981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.711991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.712081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.712090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 
00:27:09.074 [2024-07-12 17:35:27.712185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.712195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.712406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.712416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.712522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.712532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.712685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.712695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.712834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.712843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 
00:27:09.074 [2024-07-12 17:35:27.712941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.712951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.713033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.713043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.713225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.074 [2024-07-12 17:35:27.713236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.074 qpair failed and we were unable to recover it. 00:27:09.074 [2024-07-12 17:35:27.713325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.713335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.713478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.713489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 
00:27:09.075 [2024-07-12 17:35:27.713633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.713643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.713812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.713822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.713896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.713906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.714002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.714012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.714188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.714198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 
00:27:09.075 [2024-07-12 17:35:27.714350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.714360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.714546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.714557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.714730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.714739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.714827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.714836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.714975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.714985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 
00:27:09.075 [2024-07-12 17:35:27.715072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.715082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.715264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.715274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.715440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.715450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.715599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.715608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.715703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.715713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 
00:27:09.075 [2024-07-12 17:35:27.715789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.715799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.715884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.715894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.716064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.716074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.716160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.716170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.716259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.716269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 
00:27:09.075 [2024-07-12 17:35:27.716364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.716374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.716534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.716544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.716635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.716644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.716721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.716731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.716835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.716845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 
00:27:09.075 [2024-07-12 17:35:27.717055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.717066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.717141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.717151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.717406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.717416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.717504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.717514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.717591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.717601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 
00:27:09.075 [2024-07-12 17:35:27.717745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.717755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.717907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.717917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.718084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.718094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.718178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.718187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.718397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.718407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 
00:27:09.075 [2024-07-12 17:35:27.718511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.718521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.718632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.718641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.718722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.718734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.075 [2024-07-12 17:35:27.718792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.075 [2024-07-12 17:35:27.718802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.075 qpair failed and we were unable to recover it. 00:27:09.076 [2024-07-12 17:35:27.718944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.076 [2024-07-12 17:35:27.718955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.076 qpair failed and we were unable to recover it. 
00:27:09.076 [2024-07-12 17:35:27.719097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.076 [2024-07-12 17:35:27.719106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.076 qpair failed and we were unable to recover it. 00:27:09.076 [2024-07-12 17:35:27.719339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.076 [2024-07-12 17:35:27.719349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.076 qpair failed and we were unable to recover it. 00:27:09.076 [2024-07-12 17:35:27.719579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.076 [2024-07-12 17:35:27.719589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.076 qpair failed and we were unable to recover it. 00:27:09.076 [2024-07-12 17:35:27.719679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.076 [2024-07-12 17:35:27.719689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.076 qpair failed and we were unable to recover it. 00:27:09.076 [2024-07-12 17:35:27.719792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.076 [2024-07-12 17:35:27.719801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.076 qpair failed and we were unable to recover it. 
00:27:09.076 [2024-07-12 17:35:27.719885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.076 [2024-07-12 17:35:27.719894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.076 qpair failed and we were unable to recover it. 00:27:09.076 [2024-07-12 17:35:27.719987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.076 [2024-07-12 17:35:27.719996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.076 qpair failed and we were unable to recover it. 00:27:09.076 [2024-07-12 17:35:27.720076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.076 [2024-07-12 17:35:27.720086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.076 qpair failed and we were unable to recover it. 00:27:09.076 [2024-07-12 17:35:27.720167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.076 [2024-07-12 17:35:27.720177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.076 qpair failed and we were unable to recover it. 00:27:09.076 [2024-07-12 17:35:27.720386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.076 [2024-07-12 17:35:27.720397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.076 qpair failed and we were unable to recover it. 
00:27:09.076 [2024-07-12 17:35:27.720606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.076 [2024-07-12 17:35:27.720616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.076 qpair failed and we were unable to recover it. 
00:27:09.079 [identical connect()/qpair failure repeated for every retry from 17:35:27.720606 through 17:35:27.740116; all attempts to tqpair=0x7f4a7c000b90 (addr=10.0.0.2, port=4420) failed with errno = 111 and none could be recovered] 
00:27:09.079 [2024-07-12 17:35:27.740280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.740289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.740401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.740411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.740594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.740603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.740762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.740772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.740864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.740873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 
00:27:09.079 [2024-07-12 17:35:27.741013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.741023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.741183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.741193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.741281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.741291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.741455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.741465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.741622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.741632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 
00:27:09.079 [2024-07-12 17:35:27.741845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.741855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.741929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.741940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.742029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.742038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.742249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.742259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.742468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.742478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 
00:27:09.079 [2024-07-12 17:35:27.742625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.742634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.742735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.742744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.742886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.742896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.743062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.743072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.743279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.743315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 
00:27:09.079 [2024-07-12 17:35:27.743568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.743603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.743851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.743866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.744048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.744062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.744218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.744231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.744314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.744328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 
00:27:09.079 [2024-07-12 17:35:27.744577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.744593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.744769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.744783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.745045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.745058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.745177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.745191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.745348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.745361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 
00:27:09.079 [2024-07-12 17:35:27.745517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.079 [2024-07-12 17:35:27.745532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.079 qpair failed and we were unable to recover it. 00:27:09.079 [2024-07-12 17:35:27.745774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.745788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.745873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.745890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.746058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.746072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.746220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.746234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 
00:27:09.080 [2024-07-12 17:35:27.746460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.746474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.746718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.746731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.746902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.746915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.747005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.747019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.747115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.747128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 
00:27:09.080 [2024-07-12 17:35:27.747223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.747236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.747396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.747410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.747578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.747591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.747716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.747729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.747886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.747899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 
00:27:09.080 [2024-07-12 17:35:27.748015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.748028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.748186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.748200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.748350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.748363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.748471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.748484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.748634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.748648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 
00:27:09.080 [2024-07-12 17:35:27.748798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.748810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.748958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.748971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.749187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.749201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.749298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.749311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.749392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.749406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 
00:27:09.080 [2024-07-12 17:35:27.749564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.749577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.749753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.749766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.749912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.749924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.750140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.750154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a74000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.750342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.750354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 
00:27:09.080 [2024-07-12 17:35:27.750539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.750549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.750653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.750663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.750895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.750905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.750992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.751001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.751160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.751170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 
00:27:09.080 [2024-07-12 17:35:27.751356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.751366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.751474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.751484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.751564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.751575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.751729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.751738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.751921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.751931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 
00:27:09.080 [2024-07-12 17:35:27.752086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.080 [2024-07-12 17:35:27.752096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.080 qpair failed and we were unable to recover it. 00:27:09.080 [2024-07-12 17:35:27.752327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.081 [2024-07-12 17:35:27.752336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.081 qpair failed and we were unable to recover it. 00:27:09.081 [2024-07-12 17:35:27.752424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.081 [2024-07-12 17:35:27.752435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.081 qpair failed and we were unable to recover it. 00:27:09.081 [2024-07-12 17:35:27.752600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.081 [2024-07-12 17:35:27.752610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.081 qpair failed and we were unable to recover it. 00:27:09.081 [2024-07-12 17:35:27.752766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.081 [2024-07-12 17:35:27.752776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.081 qpair failed and we were unable to recover it. 
00:27:09.081 [2024-07-12 17:35:27.752866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.081 [2024-07-12 17:35:27.752876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.081 qpair failed and we were unable to recover it.
[... the identical three-line failure record (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 2024-07-12 17:35:27.752951 through 17:35:27.769143 (wall-clock stamps 00:27:09.081-00:27:09.084); duplicate entries elided ...]
00:27:09.084 [2024-07-12 17:35:27.769361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.084 [2024-07-12 17:35:27.769371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.084 qpair failed and we were unable to recover it. 00:27:09.084 [2024-07-12 17:35:27.769524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.084 [2024-07-12 17:35:27.769534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.084 qpair failed and we were unable to recover it. 00:27:09.366 [2024-07-12 17:35:27.769701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.366 [2024-07-12 17:35:27.769712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.366 qpair failed and we were unable to recover it. 00:27:09.366 [2024-07-12 17:35:27.769819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.366 [2024-07-12 17:35:27.769829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.366 qpair failed and we were unable to recover it. 00:27:09.366 [2024-07-12 17:35:27.770003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.366 [2024-07-12 17:35:27.770013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.366 qpair failed and we were unable to recover it. 
00:27:09.366 [2024-07-12 17:35:27.770159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.366 [2024-07-12 17:35:27.770169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.366 qpair failed and we were unable to recover it. 00:27:09.366 [2024-07-12 17:35:27.770350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.366 [2024-07-12 17:35:27.770359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.366 qpair failed and we were unable to recover it. 00:27:09.366 [2024-07-12 17:35:27.770457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.366 [2024-07-12 17:35:27.770467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.366 qpair failed and we were unable to recover it. 00:27:09.366 [2024-07-12 17:35:27.770631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.770641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.770731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.770741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 
00:27:09.367 [2024-07-12 17:35:27.770826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.770836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.770953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.770962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.771050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.771060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.771220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.771247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.771391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.771402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 
00:27:09.367 [2024-07-12 17:35:27.771488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.771498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.771580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.771590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.771680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.771691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.771782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.771793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.771962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.771972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 
00:27:09.367 [2024-07-12 17:35:27.772077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.772087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.772253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.772264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.772419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.772429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.772517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.772526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.772604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.772615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 
00:27:09.367 [2024-07-12 17:35:27.772714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.772723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.772814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.772824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.772913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.772923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.773040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.773049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.773242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.773255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 
00:27:09.367 [2024-07-12 17:35:27.773330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.773340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.773416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.773426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.773517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.773527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.773605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.773617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.773730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.773739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 
00:27:09.367 [2024-07-12 17:35:27.773880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.773891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.773990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.774000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.774220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.774230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.774440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.774450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.774634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.774644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 
00:27:09.367 [2024-07-12 17:35:27.774742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.774752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.774847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.774856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.775009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.775019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.775163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.775173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.775255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.775265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 
00:27:09.367 [2024-07-12 17:35:27.775445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.775456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.775612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.775623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.775719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.367 [2024-07-12 17:35:27.775729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.367 qpair failed and we were unable to recover it. 00:27:09.367 [2024-07-12 17:35:27.775822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.775832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.775975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.775985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 
00:27:09.368 [2024-07-12 17:35:27.776065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.776076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.776192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.776202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.776346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.776356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.776522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.776533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.776694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.776704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 
00:27:09.368 [2024-07-12 17:35:27.776914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.776924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.777133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.777143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.777307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.777318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.777395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.777405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.777526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.777536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 
00:27:09.368 [2024-07-12 17:35:27.777643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.777653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.777732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.777742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.777893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.777903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.777998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.778009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.778120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.778130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 
00:27:09.368 [2024-07-12 17:35:27.778190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.778199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.778292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.778302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.778389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.778398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.778502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.778513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.778618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.778629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 
00:27:09.368 [2024-07-12 17:35:27.778713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.778722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.778815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.778824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.778917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.778926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.779066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.779077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.779190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.779199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 
00:27:09.368 [2024-07-12 17:35:27.779374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.779388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.779625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.779635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.779720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.779730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.779803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.779814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 00:27:09.368 [2024-07-12 17:35:27.779905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.368 [2024-07-12 17:35:27.779914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.368 qpair failed and we were unable to recover it. 
00:27:09.371 [2024-07-12 17:35:27.796322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.371 [2024-07-12 17:35:27.796332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.371 qpair failed and we were unable to recover it. 00:27:09.371 [2024-07-12 17:35:27.796432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.371 [2024-07-12 17:35:27.796441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.371 qpair failed and we were unable to recover it. 00:27:09.371 [2024-07-12 17:35:27.796652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.371 [2024-07-12 17:35:27.796661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.371 qpair failed and we were unable to recover it. 00:27:09.371 [2024-07-12 17:35:27.796817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.371 [2024-07-12 17:35:27.796827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.371 qpair failed and we were unable to recover it. 00:27:09.371 [2024-07-12 17:35:27.797006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.371 [2024-07-12 17:35:27.797016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.371 qpair failed and we were unable to recover it. 
00:27:09.371 [2024-07-12 17:35:27.797165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.371 [2024-07-12 17:35:27.797175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.371 qpair failed and we were unable to recover it. 00:27:09.371 [2024-07-12 17:35:27.797407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.371 [2024-07-12 17:35:27.797418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.371 qpair failed and we were unable to recover it. 00:27:09.371 [2024-07-12 17:35:27.797506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.371 [2024-07-12 17:35:27.797516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.371 qpair failed and we were unable to recover it. 00:27:09.371 [2024-07-12 17:35:27.797702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.371 [2024-07-12 17:35:27.797712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.371 qpair failed and we were unable to recover it. 00:27:09.371 [2024-07-12 17:35:27.797871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.371 [2024-07-12 17:35:27.797882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.371 qpair failed and we were unable to recover it. 
00:27:09.371 [2024-07-12 17:35:27.798051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.371 [2024-07-12 17:35:27.798062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.371 qpair failed and we were unable to recover it. 00:27:09.371 [2024-07-12 17:35:27.798149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.371 [2024-07-12 17:35:27.798160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.371 qpair failed and we were unable to recover it. 00:27:09.371 [2024-07-12 17:35:27.798286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.798296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.798450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.798461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.798614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.798625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 
00:27:09.372 [2024-07-12 17:35:27.798713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.798723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.798823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.798834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.798891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.798901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.799006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.799016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.799125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.799135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 
00:27:09.372 [2024-07-12 17:35:27.799274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.799283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.799491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.799501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.799660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.799671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.799843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.799853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.799993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.800003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 
00:27:09.372 [2024-07-12 17:35:27.800107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.800117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.800294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.800305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.800448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.800458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.800550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.800560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.800651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.800661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 
00:27:09.372 [2024-07-12 17:35:27.800805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.800816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.800907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.800917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.801011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.801021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.801110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.801120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.801262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.801272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 
00:27:09.372 [2024-07-12 17:35:27.801461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.801471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.801635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.801646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.801735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.801746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.801835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.801845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.801943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.801956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 
00:27:09.372 [2024-07-12 17:35:27.802133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.802143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.802374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.802387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.802551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.802561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.802771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.802782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.803002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.803012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 
00:27:09.372 [2024-07-12 17:35:27.803198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.803209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.803440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.803451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.803637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.803647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.803748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.803759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.803916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.803927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 
00:27:09.372 [2024-07-12 17:35:27.804050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.804060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.804264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.372 [2024-07-12 17:35:27.804274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.372 qpair failed and we were unable to recover it. 00:27:09.372 [2024-07-12 17:35:27.804443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.804454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.804619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.804629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.804803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.804813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 
00:27:09.373 [2024-07-12 17:35:27.804942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.804952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.805093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.805104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.805264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.805275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.805372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.805385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.805457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.805468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 
00:27:09.373 [2024-07-12 17:35:27.805621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.805631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.805771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.805781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.805987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.805997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.806150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.806161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.806321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.806331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 
00:27:09.373 [2024-07-12 17:35:27.806417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.806428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.806584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.806595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.806733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.806744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.806823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.806833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.806990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.807000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 
00:27:09.373 [2024-07-12 17:35:27.807095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.807105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.807326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.807336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.807478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.807489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.807665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.807676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 00:27:09.373 [2024-07-12 17:35:27.807753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.807764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 
00:27:09.373 [2024-07-12 17:35:27.807842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.373 [2024-07-12 17:35:27.807853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.373 qpair failed and we were unable to recover it. 
00:27:09.373 [... same triplet — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock error for tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeated for every subsequent connection attempt from 17:35:27.807842 through 17:35:27.824903 ...] 
00:27:09.376 [2024-07-12 17:35:27.825069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.825079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.825221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.825231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.825296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.825306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.825451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.825461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.825676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.825687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 
00:27:09.376 [2024-07-12 17:35:27.825908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.825918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.826013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.826023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.826116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.826125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.826333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.826344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.826423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.826433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 
00:27:09.376 [2024-07-12 17:35:27.826596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.826606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.826813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.826822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.826963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.826974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.827052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.827061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.827144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.827153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 
00:27:09.376 [2024-07-12 17:35:27.827239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.827249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.827390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.376 [2024-07-12 17:35:27.827400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.376 qpair failed and we were unable to recover it. 00:27:09.376 [2024-07-12 17:35:27.827547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.827557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.827649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.827659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.827759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.827769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 
00:27:09.377 [2024-07-12 17:35:27.827855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.827864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.828017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.828027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.828186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.828196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.828427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.828438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.828579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.828589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 
00:27:09.377 [2024-07-12 17:35:27.828745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.828755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.828914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.828924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.829014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.829024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.829098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.829108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.829206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.829216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 
00:27:09.377 [2024-07-12 17:35:27.829443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.829453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.829662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.829671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.829831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.829841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.829986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.829995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.830088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.830098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 
00:27:09.377 [2024-07-12 17:35:27.830180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.830190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.830342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.830355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.830447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.830458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.830557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.830567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.830648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.830657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 
00:27:09.377 [2024-07-12 17:35:27.830762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.830771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.830932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.830943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.831082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.831092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.831254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.831264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.831369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.831382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 
00:27:09.377 [2024-07-12 17:35:27.831467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.831477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.831619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.831630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.831756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.831774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.831875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.831885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.831980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.831990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 
00:27:09.377 [2024-07-12 17:35:27.832155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.832166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.832269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.832279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.832368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.832381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.832462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.832472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.832626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.832635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 
00:27:09.377 [2024-07-12 17:35:27.832724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.832733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.832825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.832835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.832897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.377 [2024-07-12 17:35:27.832907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.377 qpair failed and we were unable to recover it. 00:27:09.377 [2024-07-12 17:35:27.832989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.832999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.833137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.833147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 
00:27:09.378 [2024-07-12 17:35:27.833290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.833300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.833392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.833401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.833472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.833482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.833624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.833634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.833708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.833719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 
00:27:09.378 [2024-07-12 17:35:27.833819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.833829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.833975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.833986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.834074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.834084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.834240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.834250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.834330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.834340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 
00:27:09.378 [2024-07-12 17:35:27.834417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.834427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.834577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.834587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.834730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.834740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.834924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.834934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.835021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.835031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 
00:27:09.378 [2024-07-12 17:35:27.835170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.835179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.835446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.835459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.835552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.835562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.835644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.835653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.835825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.835836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 
00:27:09.378 [2024-07-12 17:35:27.835927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.835937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.836077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.836088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.836159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.836168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.836407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.836418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.836587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.836597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 
00:27:09.378 [2024-07-12 17:35:27.836852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.836862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.836955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.836964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.837195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.837205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.837452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.837462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.837633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.837643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 
00:27:09.378 [2024-07-12 17:35:27.837733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.837743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.837890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.837900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.838107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.838116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.838360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.838370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.838461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.838470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 
00:27:09.378 [2024-07-12 17:35:27.838569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.838578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.838767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.838777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.838940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.838950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.378 [2024-07-12 17:35:27.839200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.378 [2024-07-12 17:35:27.839210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.378 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.839431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.839441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 
00:27:09.379 [2024-07-12 17:35:27.839614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.839624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.839775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.839785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.840018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.840028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.840189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.840198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.840302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.840312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 
00:27:09.379 [2024-07-12 17:35:27.840539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.840550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.840662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.840672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.840764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.840773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.840980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.840990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.841062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.841072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 
00:27:09.379 [2024-07-12 17:35:27.841291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.841301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.841457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.841468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.841669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.841679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.841898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.841907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.842211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.842221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 
00:27:09.379 [2024-07-12 17:35:27.842361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.842371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.842548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.842560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.842803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.842812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.842955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.842964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.843189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.843200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 
00:27:09.379 [2024-07-12 17:35:27.843411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.843421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.843506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.843516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.843725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.843735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.843941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.843951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.844038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.844047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 
00:27:09.379 [2024-07-12 17:35:27.844196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.844206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.844365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.844375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.844558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.844568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.844710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.844720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 00:27:09.379 [2024-07-12 17:35:27.844927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.379 [2024-07-12 17:35:27.844938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.379 qpair failed and we were unable to recover it. 
00:27:09.379 [2024-07-12 17:35:27.845082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.845093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.845183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.845193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.845372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.845387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.845530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.845540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.845631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.845641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 
00:27:09.380 [2024-07-12 17:35:27.845787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.845797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.846009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.846019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.846108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.846117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.846260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.846270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.846500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.846511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 
00:27:09.380 [2024-07-12 17:35:27.846739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.846749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.846978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.846987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.847269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.847279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.847445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.847455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.847535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.847545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 
00:27:09.380 [2024-07-12 17:35:27.847698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.847709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.847945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.847955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.848068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.848077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.848222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.848232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.848404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.848413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 
00:27:09.380 [2024-07-12 17:35:27.848737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.848748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.848903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.848913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.849171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.849181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.849408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.849418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.849650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.849660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 
00:27:09.380 [2024-07-12 17:35:27.849820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.849830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.850018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.850029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.850199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.850209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.850369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.850382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.850564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.850574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 
00:27:09.380 [2024-07-12 17:35:27.850686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.850695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.850918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.850928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.851092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.851102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.851326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.851335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.851504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.851514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 
00:27:09.380 [2024-07-12 17:35:27.851680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.851690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.851897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.851907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.852049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.852058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.852285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.852295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.380 [2024-07-12 17:35:27.852399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.852409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 
00:27:09.380 [2024-07-12 17:35:27.852647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.380 [2024-07-12 17:35:27.852657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.380 qpair failed and we were unable to recover it. 00:27:09.381 [2024-07-12 17:35:27.852816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.381 [2024-07-12 17:35:27.852825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.381 qpair failed and we were unable to recover it. 00:27:09.381 [2024-07-12 17:35:27.852975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.381 [2024-07-12 17:35:27.852985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.381 qpair failed and we were unable to recover it. 00:27:09.381 [2024-07-12 17:35:27.853135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.381 [2024-07-12 17:35:27.853144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.381 qpair failed and we were unable to recover it. 00:27:09.381 [2024-07-12 17:35:27.853218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.381 [2024-07-12 17:35:27.853228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.381 qpair failed and we were unable to recover it. 
[... the same pair of *ERROR* records (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats continuously through [2024-07-12 17:35:27.875276] ...]
00:27:09.383 [2024-07-12 17:35:27.875488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.383 [2024-07-12 17:35:27.875499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.383 qpair failed and we were unable to recover it. 00:27:09.383 [2024-07-12 17:35:27.875657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.383 [2024-07-12 17:35:27.875667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.383 qpair failed and we were unable to recover it. 00:27:09.383 [2024-07-12 17:35:27.875803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.383 [2024-07-12 17:35:27.875813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.383 qpair failed and we were unable to recover it. 00:27:09.383 [2024-07-12 17:35:27.876023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.383 [2024-07-12 17:35:27.876033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.383 qpair failed and we were unable to recover it. 00:27:09.383 [2024-07-12 17:35:27.876173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.383 [2024-07-12 17:35:27.876183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.383 qpair failed and we were unable to recover it. 
00:27:09.383 [2024-07-12 17:35:27.876363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.383 [2024-07-12 17:35:27.876372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.383 qpair failed and we were unable to recover it. 00:27:09.383 [2024-07-12 17:35:27.876539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.383 [2024-07-12 17:35:27.876549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.383 qpair failed and we were unable to recover it. 00:27:09.383 [2024-07-12 17:35:27.876806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.876816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.877001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.877010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.877174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.877184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 
00:27:09.384 [2024-07-12 17:35:27.877285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.877295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.877455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.877466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.877639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.877649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.877877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.877888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.878046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.878056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 
00:27:09.384 [2024-07-12 17:35:27.878280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.878289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.878441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.878450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.878601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.878611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.878759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.878769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.878978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.878988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 
00:27:09.384 [2024-07-12 17:35:27.879131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.879142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.879346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.879355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.879517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.879527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.879639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.879650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.879798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.879808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 
00:27:09.384 [2024-07-12 17:35:27.879975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.879985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.880196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.880206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.880442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.880452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.880656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.880666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.880899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.880909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 
00:27:09.384 [2024-07-12 17:35:27.881053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.881063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.881226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.881239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.881468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.881479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.881580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.881590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.881667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.881677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 
00:27:09.384 [2024-07-12 17:35:27.881890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.881899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.882040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.882050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.882277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.882287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.882532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.882542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.882750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.882760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 
00:27:09.384 [2024-07-12 17:35:27.882914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.882923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.384 qpair failed and we were unable to recover it. 00:27:09.384 [2024-07-12 17:35:27.883083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.384 [2024-07-12 17:35:27.883093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.883323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.883332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.883542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.883552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.883689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.883699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 
00:27:09.385 [2024-07-12 17:35:27.883863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.883874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.883951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.883961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.884220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.884230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.884372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.884385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.884468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.884478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 
00:27:09.385 [2024-07-12 17:35:27.884652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.884662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.884818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.884827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.884970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.884980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.885144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.885155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.885375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.885388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 
00:27:09.385 [2024-07-12 17:35:27.885483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.885493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.885652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.885662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.885823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.885833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.886079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.886088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.886265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.886274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 
00:27:09.385 [2024-07-12 17:35:27.886508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.886518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.886607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.886617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.886837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.886847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.887078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.887088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.887248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.887257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 
00:27:09.385 [2024-07-12 17:35:27.887434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.887445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.887520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.887529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.887688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.887698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.887934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.887944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.888029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.888038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 
00:27:09.385 [2024-07-12 17:35:27.888215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.888225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.888402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.888414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.888639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.888649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.888812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.888822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 00:27:09.385 [2024-07-12 17:35:27.889028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.385 [2024-07-12 17:35:27.889038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.385 qpair failed and we were unable to recover it. 
00:27:09.385 [2024-07-12 17:35:27.889189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.385 [2024-07-12 17:35:27.889199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.385 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure record pair above repeats ~115 times between 17:35:27.889189 and 17:35:27.912871, alternating between tqpair=0x7f4a7c000b90 and tqpair=0x7f4a84000b90, always with errno = 111 (ECONNREFUSED), addr=10.0.0.2, port=4420 ...]
00:27:09.388 [2024-07-12 17:35:27.912861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.388 [2024-07-12 17:35:27.912871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.388 qpair failed and we were unable to recover it.
00:27:09.388 [2024-07-12 17:35:27.913021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.388 [2024-07-12 17:35:27.913031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.388 qpair failed and we were unable to recover it. 00:27:09.388 [2024-07-12 17:35:27.913244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.388 [2024-07-12 17:35:27.913255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.388 qpair failed and we were unable to recover it. 00:27:09.388 [2024-07-12 17:35:27.913435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.388 [2024-07-12 17:35:27.913445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.388 qpair failed and we were unable to recover it. 00:27:09.388 [2024-07-12 17:35:27.913632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.388 [2024-07-12 17:35:27.913642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.388 qpair failed and we were unable to recover it. 00:27:09.388 [2024-07-12 17:35:27.913744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.388 [2024-07-12 17:35:27.913753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.388 qpair failed and we were unable to recover it. 
00:27:09.388 [2024-07-12 17:35:27.914007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.388 [2024-07-12 17:35:27.914017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.388 qpair failed and we were unable to recover it. 00:27:09.388 [2024-07-12 17:35:27.914268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.388 [2024-07-12 17:35:27.914278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.388 qpair failed and we were unable to recover it. 00:27:09.388 [2024-07-12 17:35:27.914434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.388 [2024-07-12 17:35:27.914444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.388 qpair failed and we were unable to recover it. 00:27:09.388 [2024-07-12 17:35:27.914599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.388 [2024-07-12 17:35:27.914609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.914819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.914829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 
00:27:09.389 [2024-07-12 17:35:27.914931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.914941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.915032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.915042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.915272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.915281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.915385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.915396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.915492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.915502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 
00:27:09.389 [2024-07-12 17:35:27.915789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.915798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.915959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.915969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.916127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.916137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.916408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.916418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.916509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.916519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 
00:27:09.389 [2024-07-12 17:35:27.916774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.916783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.916952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.916962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.917124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.917133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.917295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.917304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.917514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.917524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 
00:27:09.389 [2024-07-12 17:35:27.917676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.917686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.917955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.917965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.918170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.918180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.918369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.918388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.918656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.918670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 
00:27:09.389 [2024-07-12 17:35:27.918849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.918862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.919127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.919141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.919380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.919394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.919582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.919595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.919828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.919842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 
00:27:09.389 [2024-07-12 17:35:27.920035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.920049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.920199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.920213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.920455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.920469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.920707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.920721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.920972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.920985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 
00:27:09.389 [2024-07-12 17:35:27.921179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.921192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.921428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.921442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.921603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.921616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.921819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.921832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.922069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.922082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 
00:27:09.389 [2024-07-12 17:35:27.922305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.922318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.922499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.922513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.922703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.922717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.922875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.389 [2024-07-12 17:35:27.922888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.389 qpair failed and we were unable to recover it. 00:27:09.389 [2024-07-12 17:35:27.923038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.923052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 
00:27:09.390 [2024-07-12 17:35:27.923286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.923299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.923518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.923532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.923690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.923704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.923927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.923941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.924209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.924223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 
00:27:09.390 [2024-07-12 17:35:27.924458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.924472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.924704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.924718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.924913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.924927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.925145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.925158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.925375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.925391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 
00:27:09.390 [2024-07-12 17:35:27.925607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.925620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.925787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.925801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.926017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.926030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.926196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.926209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.926355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.926368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 
00:27:09.390 [2024-07-12 17:35:27.926523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.926537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.926780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.926794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.927032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.927045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.927310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.927326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.927548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.927563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 
00:27:09.390 [2024-07-12 17:35:27.927727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.927740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.927987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.928000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.928241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.928255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.928498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.928512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.928677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.928690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 
00:27:09.390 [2024-07-12 17:35:27.928876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.928889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.929061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.929074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.929265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.929278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.929513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.929527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.929679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.929693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 
00:27:09.390 [2024-07-12 17:35:27.929942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.929955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.930125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.930139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.930361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.930374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.930619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.930632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.930820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.930833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 
00:27:09.390 [2024-07-12 17:35:27.931079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.931092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.931310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.931323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.931514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.931528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.931685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.931699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.390 qpair failed and we were unable to recover it. 00:27:09.390 [2024-07-12 17:35:27.931955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.390 [2024-07-12 17:35:27.931969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 
00:27:09.391 [2024-07-12 17:35:27.932181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.932195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.932376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.932394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.932577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.932590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.932740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.932754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.932920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.932934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 
00:27:09.391 [2024-07-12 17:35:27.933037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.933050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.933200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.933213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.933389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.933403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.933643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.933656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.933846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.933860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 
00:27:09.391 [2024-07-12 17:35:27.934044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.934058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.934225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.934239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.934397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.934410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.934640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.934653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.934772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.934786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 
00:27:09.391 [2024-07-12 17:35:27.935026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.935040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.935207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.935220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.935396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.935410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.935624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.935640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.935801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.935815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 
00:27:09.391 [2024-07-12 17:35:27.936052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.936065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.936305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.936318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.936559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.936572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.936739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.936753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.936901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.936914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 
00:27:09.391 [2024-07-12 17:35:27.937126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.937139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.937358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.937371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.937538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.937552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.937717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.937731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.937894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.937907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 
00:27:09.391 [2024-07-12 17:35:27.938159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.938172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.938337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.938351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.938531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.938545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.938763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.938776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.939008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.939021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 
00:27:09.391 [2024-07-12 17:35:27.939206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.939219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.939324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.939338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.939499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.939513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.939615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.939629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 00:27:09.391 [2024-07-12 17:35:27.939874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.391 [2024-07-12 17:35:27.939887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.391 qpair failed and we were unable to recover it. 
00:27:09.391 [2024-07-12 17:35:27.940053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.940066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.940174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.940188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.940369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.940386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.940623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.940637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.940741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.940755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 
00:27:09.392 [2024-07-12 17:35:27.940923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.940936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.941197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.941210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.941400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.941413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.941559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.941573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.941746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.941759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 
00:27:09.392 [2024-07-12 17:35:27.941954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.941967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.942145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.942158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.942251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.942264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.942431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.942444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.942722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.942736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 
00:27:09.392 [2024-07-12 17:35:27.942927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.942941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.943198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.943211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.943491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.943505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.943765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.943781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.943944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.943957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 
00:27:09.392 [2024-07-12 17:35:27.944170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.944184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.944423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.944437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.944664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.944678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.944943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.944957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.945205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.945218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 
00:27:09.392 [2024-07-12 17:35:27.945438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.945452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.945619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.945632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.945870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.945883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.945979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.945993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.946106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.946120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 
00:27:09.392 [2024-07-12 17:35:27.946301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.946315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.946551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.946565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.946781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.946795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.947022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.947036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.392 qpair failed and we were unable to recover it. 00:27:09.392 [2024-07-12 17:35:27.947294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.392 [2024-07-12 17:35:27.947307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 
00:27:09.393 [2024-07-12 17:35:27.947497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.947511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.947679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.947692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.947931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.947945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.948159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.948172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.948340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.948353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 
00:27:09.393 [2024-07-12 17:35:27.948608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.948622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.948836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.948849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.948930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.948944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.949183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.949197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.949460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.949474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 
00:27:09.393 [2024-07-12 17:35:27.949651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.949665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.949823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.949837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.950007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.950020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.950262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.950275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.950517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.950530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 
00:27:09.393 [2024-07-12 17:35:27.950746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.950759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.951022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.951036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.951193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.951206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.951452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.951466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.951729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.951743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 
00:27:09.393 [2024-07-12 17:35:27.951958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.951972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.952186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.952199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.952441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.952454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.952688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.952703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.952936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.952946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 
00:27:09.393 [2024-07-12 17:35:27.953184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.953194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.953413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.953423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.953635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.953645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.953874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.953884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.954115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.954124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 
00:27:09.393 [2024-07-12 17:35:27.954227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.954236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.954468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.954478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.954635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.954645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.954887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.954897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.955042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.955052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 
00:27:09.393 [2024-07-12 17:35:27.955273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.955282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.955498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.955508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.955717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.955727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.956025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.956035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 00:27:09.393 [2024-07-12 17:35:27.956263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.393 [2024-07-12 17:35:27.956272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.393 qpair failed and we were unable to recover it. 
00:27:09.394 [2024-07-12 17:35:27.956423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.956434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.956532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.956542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.956692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.956702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.956909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.956918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.957144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.957154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 
00:27:09.394 [2024-07-12 17:35:27.957318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.957328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.957550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.957560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.957793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.957803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.957988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.957998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.958203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.958212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 
00:27:09.394 [2024-07-12 17:35:27.958385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.958396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.958628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.958638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.958856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.958865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.959044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.959053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.959265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.959275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 
00:27:09.394 [2024-07-12 17:35:27.959413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.959423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.959632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.959642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.959804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.959814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.959974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.959984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.960203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.960213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 
00:27:09.394 [2024-07-12 17:35:27.960385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.960395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.960635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.960647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.960744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.960754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.960849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.960860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.961069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.961079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 
00:27:09.394 [2024-07-12 17:35:27.961233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.961242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.961402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.961412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.961618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.961628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.961836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.961845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.962087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.962097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 
00:27:09.394 [2024-07-12 17:35:27.962240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.962250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.962428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.962439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.962674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.962685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.962857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.962868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.963049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.963059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 
00:27:09.394 [2024-07-12 17:35:27.963271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.963280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.963431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.963442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.963529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.963539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.963730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.963740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 00:27:09.394 [2024-07-12 17:35:27.963913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.963923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.394 qpair failed and we were unable to recover it. 
00:27:09.394 [2024-07-12 17:35:27.964102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.394 [2024-07-12 17:35:27.964112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.964261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.964272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.964502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.964514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.964661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.964671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.964879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.964889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 
00:27:09.395 [2024-07-12 17:35:27.965034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.965044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.965265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.965275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.965453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.965463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.965691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.965701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.965921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.965932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 
00:27:09.395 [2024-07-12 17:35:27.966091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.966101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.966332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.966342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.966582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.966592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.966809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.966820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.967050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.967061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 
00:27:09.395 [2024-07-12 17:35:27.967295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.967305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.967484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.967495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.967733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.967743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.967842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.967852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.968111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.968122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 
00:27:09.395 [2024-07-12 17:35:27.968297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.968307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.968463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.968474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.968651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.968661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.968846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.968858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.969014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.969024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 
00:27:09.395 [2024-07-12 17:35:27.969197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.969207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.969440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.969451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.969680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.969690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.969841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.969851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.969999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.970008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 
00:27:09.395 [2024-07-12 17:35:27.970214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.970224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.970384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.970394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.970605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.970615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.970782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.970791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.971046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.971056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 
00:27:09.395 [2024-07-12 17:35:27.971237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.971247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.971533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.971544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.971792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.971802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.971911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.971921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.972130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.972139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 
00:27:09.395 [2024-07-12 17:35:27.972370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.395 [2024-07-12 17:35:27.972390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.395 qpair failed and we were unable to recover it. 00:27:09.395 [2024-07-12 17:35:27.972673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.972683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.972838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.972848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.973079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.973088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.973246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.973256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 
00:27:09.396 [2024-07-12 17:35:27.973402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.973412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.973502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.973511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.973652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.973662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.973787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.973796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.973951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.973961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 
00:27:09.396 [2024-07-12 17:35:27.974138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.974148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.974358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.974367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.974452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.974463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.974694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.974704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.974845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.974854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 
00:27:09.396 [2024-07-12 17:35:27.974963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.974973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.975209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.975220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.975381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.975391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.975590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.975600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.975781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.975791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 
00:27:09.396 [2024-07-12 17:35:27.975882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.975892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.976196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.976206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.976381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.976392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.976567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.976578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.976789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.976800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 
00:27:09.396 [2024-07-12 17:35:27.977007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.977018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.977184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.977194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.977374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.977388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.977499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.977510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.977750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.977760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 
00:27:09.396 [2024-07-12 17:35:27.977993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.978003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.978170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.978179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.978346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.978356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.978583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.978594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.978817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.978827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 
00:27:09.396 [2024-07-12 17:35:27.978937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.978946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.979123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.979133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.979280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.979290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.979429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.979439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.979662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.979673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 
00:27:09.396 [2024-07-12 17:35:27.979884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.979895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.396 [2024-07-12 17:35:27.980051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.396 [2024-07-12 17:35:27.980061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.396 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.980219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.980229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.980325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.980337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.980600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.980611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 
00:27:09.397 [2024-07-12 17:35:27.980764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.980774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.980865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.980874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.981057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.981067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.981230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.981240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.981393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.981404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 
00:27:09.397 [2024-07-12 17:35:27.981502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.981512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.981742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.981752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.981906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.981916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.982069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.982079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.982290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.982299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 
00:27:09.397 [2024-07-12 17:35:27.982376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.982389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.982532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.982542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.982697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.982708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.982885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.982895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.983061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.983071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 
00:27:09.397 [2024-07-12 17:35:27.983337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.983347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.983584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.983594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.983802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.983811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.983978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.983990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.984222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.984232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 
00:27:09.397 [2024-07-12 17:35:27.984494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.984504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.984685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.984695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.984798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.984808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.985040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.985051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.985279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.985290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 
00:27:09.397 [2024-07-12 17:35:27.985392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.985402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.985543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.985552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.985762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.985773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.985927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.985938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.986087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.986096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 
00:27:09.397 [2024-07-12 17:35:27.986202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.986212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.397 [2024-07-12 17:35:27.986367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.397 [2024-07-12 17:35:27.986379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.397 qpair failed and we were unable to recover it. 00:27:09.398 [2024-07-12 17:35:27.986586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.398 [2024-07-12 17:35:27.986597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.398 qpair failed and we were unable to recover it. 00:27:09.398 [2024-07-12 17:35:27.986759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.398 [2024-07-12 17:35:27.986768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.398 qpair failed and we were unable to recover it. 00:27:09.398 [2024-07-12 17:35:27.986908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.398 [2024-07-12 17:35:27.986918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.398 qpair failed and we were unable to recover it. 
00:27:09.398 [2024-07-12 17:35:27.991356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.398 [2024-07-12 17:35:27.991367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.398 qpair failed and we were unable to recover it. 00:27:09.398 [2024-07-12 17:35:27.991505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.398 [2024-07-12 17:35:27.991532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.398 qpair failed and we were unable to recover it. 00:27:09.398 [2024-07-12 17:35:27.991637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.398 [2024-07-12 17:35:27.991651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.398 qpair failed and we were unable to recover it. 00:27:09.398 [2024-07-12 17:35:27.991801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.398 [2024-07-12 17:35:27.991814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.398 qpair failed and we were unable to recover it. 00:27:09.398 [2024-07-12 17:35:27.992086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.398 [2024-07-12 17:35:27.992099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24deed0 with addr=10.0.0.2, port=4420 00:27:09.398 qpair failed and we were unable to recover it. 
00:27:09.399 [2024-07-12 17:35:27.995681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.399 [2024-07-12 17:35:27.995694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.399 qpair failed and we were unable to recover it. 00:27:09.399 [2024-07-12 17:35:27.995839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.399 [2024-07-12 17:35:27.995849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.399 qpair failed and we were unable to recover it. 00:27:09.399 [2024-07-12 17:35:27.996004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.399 [2024-07-12 17:35:27.996014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.399 qpair failed and we were unable to recover it. 00:27:09.399 [2024-07-12 17:35:27.996172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.399 [2024-07-12 17:35:27.996182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.399 qpair failed and we were unable to recover it. 00:27:09.399 [2024-07-12 17:35:27.996355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.399 [2024-07-12 17:35:27.996365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.399 qpair failed and we were unable to recover it. 
00:27:09.400 [2024-07-12 17:35:28.004431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.400 [2024-07-12 17:35:28.004440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.400 qpair failed and we were unable to recover it. 00:27:09.400 [2024-07-12 17:35:28.004674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.400 [2024-07-12 17:35:28.004685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.400 qpair failed and we were unable to recover it. 00:27:09.400 [2024-07-12 17:35:28.004777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.400 [2024-07-12 17:35:28.004787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.400 qpair failed and we were unable to recover it. 00:27:09.400 [2024-07-12 17:35:28.004877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.400 [2024-07-12 17:35:28.004886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.400 qpair failed and we were unable to recover it. 00:27:09.400 [2024-07-12 17:35:28.005041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.400 [2024-07-12 17:35:28.005051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.400 qpair failed and we were unable to recover it. 
00:27:09.400 [2024-07-12 17:35:28.005196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.400 [2024-07-12 17:35:28.005206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.400 qpair failed and we were unable to recover it. 00:27:09.400 [2024-07-12 17:35:28.005290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.400 [2024-07-12 17:35:28.005299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.400 qpair failed and we were unable to recover it. 00:27:09.400 [2024-07-12 17:35:28.005527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.400 [2024-07-12 17:35:28.005539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.400 qpair failed and we were unable to recover it. 00:27:09.400 [2024-07-12 17:35:28.005704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.400 [2024-07-12 17:35:28.005714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.400 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.005869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.005879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 
00:27:09.401 [2024-07-12 17:35:28.006035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.006045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.006105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.006115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.006266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.006277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.006498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.006508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.006715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.006724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 
00:27:09.401 [2024-07-12 17:35:28.006808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.006819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.006972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.006982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.007134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.007143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.007368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.007382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.007476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.007486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 
00:27:09.401 [2024-07-12 17:35:28.007636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.007646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.007724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.007734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.007820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.007830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.007903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.007913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.008064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.008074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 
00:27:09.401 [2024-07-12 17:35:28.008148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.008158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.008261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.008271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.008383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.008393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.008625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.008635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.008711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.008721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 
00:27:09.401 [2024-07-12 17:35:28.008811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.008821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.008981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.008990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.009086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.009095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.009172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.009182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.009423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.009433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 
00:27:09.401 [2024-07-12 17:35:28.009591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.009601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.009746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.009757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.009844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.009854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.009943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.009953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.010039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.010049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 
00:27:09.401 [2024-07-12 17:35:28.010150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.010159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.010243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.010253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.010393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.010403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.401 [2024-07-12 17:35:28.010557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.401 [2024-07-12 17:35:28.010567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.401 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.010639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.010649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 
00:27:09.402 [2024-07-12 17:35:28.010789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.010799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.010956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.010967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.011039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.011051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.011193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.011203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.011286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.011296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 
00:27:09.402 [2024-07-12 17:35:28.011454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.011464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.011615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.011625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.011857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.011867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.012015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.012025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.012116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.012126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 
00:27:09.402 [2024-07-12 17:35:28.012217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.012227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.012383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.012393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.012540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.012550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.012769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.012779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.012903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.012912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 
00:27:09.402 [2024-07-12 17:35:28.013053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.013063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.013138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.013148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.013302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.013315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.013522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.013532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.013635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.013645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 
00:27:09.402 [2024-07-12 17:35:28.013736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.013745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.013823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.013833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.013910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.013921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.014091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.014101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.014324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.014334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 
00:27:09.402 [2024-07-12 17:35:28.014411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.014421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.014513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.014524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.014623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.014633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.014717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.014726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 00:27:09.402 [2024-07-12 17:35:28.014945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.014955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 
00:27:09.402 [2024-07-12 17:35:28.015166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.402 [2024-07-12 17:35:28.015176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.402 qpair failed and we were unable to recover it. 
[The error triplet above — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats identically for every reconnect attempt from 17:35:28.015341 through 17:35:28.031821; the duplicated entries are elided here.]
00:27:09.405 [2024-07-12 17:35:28.031966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.405 [2024-07-12 17:35:28.031976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.405 qpair failed and we were unable to recover it. 00:27:09.405 [2024-07-12 17:35:28.032118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.405 [2024-07-12 17:35:28.032128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.405 qpair failed and we were unable to recover it. 00:27:09.405 [2024-07-12 17:35:28.032304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.405 [2024-07-12 17:35:28.032315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.405 qpair failed and we were unable to recover it. 00:27:09.405 [2024-07-12 17:35:28.032551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.405 [2024-07-12 17:35:28.032562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.405 qpair failed and we were unable to recover it. 00:27:09.405 [2024-07-12 17:35:28.032657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.405 [2024-07-12 17:35:28.032667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.405 qpair failed and we were unable to recover it. 
00:27:09.405 [2024-07-12 17:35:28.032883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.405 [2024-07-12 17:35:28.032893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.405 qpair failed and we were unable to recover it. 00:27:09.405 [2024-07-12 17:35:28.033067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.405 [2024-07-12 17:35:28.033076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.405 qpair failed and we were unable to recover it. 00:27:09.405 [2024-07-12 17:35:28.033221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.405 [2024-07-12 17:35:28.033231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.405 qpair failed and we were unable to recover it. 00:27:09.405 [2024-07-12 17:35:28.033398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.405 [2024-07-12 17:35:28.033409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.405 qpair failed and we were unable to recover it. 00:27:09.405 [2024-07-12 17:35:28.033574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.405 [2024-07-12 17:35:28.033584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.405 qpair failed and we were unable to recover it. 
00:27:09.405 [2024-07-12 17:35:28.033662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.033671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.033759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.033769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.033950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.033959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.034048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.034057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.034214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.034224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 
00:27:09.406 [2024-07-12 17:35:28.034372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.034385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.034616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.034626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.034768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.034780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.034883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.034894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.034988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.034998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 
00:27:09.406 [2024-07-12 17:35:28.035158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.035170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.035262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.035272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.035417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.035427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.035520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.035530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.035601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.035611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 
00:27:09.406 [2024-07-12 17:35:28.035703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.035713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.035809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.035819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.035961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.035971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.036127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.036137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.036213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.036223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 
00:27:09.406 [2024-07-12 17:35:28.036370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.036383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.036477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.036487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.036582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.036592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.036732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.036742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.036895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.036904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 
00:27:09.406 [2024-07-12 17:35:28.037007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.037017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.037107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.037117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.037279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.037288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.037523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.037533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.037698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.037708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 
00:27:09.406 [2024-07-12 17:35:28.037814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.037824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.037987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.037997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.038141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.038151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.038252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.038262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.038413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.038424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 
00:27:09.406 [2024-07-12 17:35:28.038649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.038659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.038750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.038759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.038920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.038930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.039086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.039096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.406 [2024-07-12 17:35:28.039263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.039274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 
00:27:09.406 [2024-07-12 17:35:28.039433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.406 [2024-07-12 17:35:28.039443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.406 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.039601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.039612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.039701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.039711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.039921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.039933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.040021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.040032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 
00:27:09.407 [2024-07-12 17:35:28.040262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.040272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.040346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.040356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.040501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.040514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.040611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.040622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.040718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.040728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 
00:27:09.407 [2024-07-12 17:35:28.040820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.040831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.040935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.040945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.041014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.041024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.041127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.041137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.041221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.041231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 
00:27:09.407 [2024-07-12 17:35:28.041393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.041403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.041496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.041506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.041578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.041588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.041749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.041759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.041923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.041933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 
00:27:09.407 [2024-07-12 17:35:28.042076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.042086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.042165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.042175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.042318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.042328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.042431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.042442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 00:27:09.407 [2024-07-12 17:35:28.042627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.042637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it. 
00:27:09.407 [2024-07-12 17:35:28.042822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.407 [2024-07-12 17:35:28.042832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.407 qpair failed and we were unable to recover it.
[... identical error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated continuously from 17:35:28.042822 through 17:35:28.058563 ...]
00:27:09.410 [2024-07-12 17:35:28.058702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.410 [2024-07-12 17:35:28.058712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.410 qpair failed and we were unable to recover it. 00:27:09.410 [2024-07-12 17:35:28.058860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.410 [2024-07-12 17:35:28.058869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.410 qpair failed and we were unable to recover it. 00:27:09.410 [2024-07-12 17:35:28.059081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.410 [2024-07-12 17:35:28.059092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.410 qpair failed and we were unable to recover it. 00:27:09.410 [2024-07-12 17:35:28.059266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.410 [2024-07-12 17:35:28.059276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.410 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.059432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.059442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 
00:27:09.411 [2024-07-12 17:35:28.059608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.059618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.059722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.059733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.059823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.059833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.060002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.060012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.060100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.060110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 
00:27:09.411 [2024-07-12 17:35:28.060292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.060302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.060444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.060454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.060632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.060642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.060745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.060755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.060841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.060851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 
00:27:09.411 [2024-07-12 17:35:28.060932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.060942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.061026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.061036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.061100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.061110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.061195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.061205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.061365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.061375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 
00:27:09.411 [2024-07-12 17:35:28.061483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.061493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.061631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.061642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.061739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.061748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.061956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.061966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.062080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.062090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 
00:27:09.411 [2024-07-12 17:35:28.062225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.062235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.062312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.062322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.062496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.062508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.062661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.062671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.062757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.062767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 
00:27:09.411 [2024-07-12 17:35:28.062855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.062865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.062965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.062974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.063061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.063071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.063212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.063222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.063309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.063319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 
00:27:09.411 [2024-07-12 17:35:28.063389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.063398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.063487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.063498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.063706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.063716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.063791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.063801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.063891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.063900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 
00:27:09.411 [2024-07-12 17:35:28.064052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.064062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.064270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.064279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.064366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.064379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.064556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.411 [2024-07-12 17:35:28.064566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.411 qpair failed and we were unable to recover it. 00:27:09.411 [2024-07-12 17:35:28.064806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.064816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-07-12 17:35:28.064887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.064897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.064971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.064981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.065076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.065086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.065256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.065266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.065353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.065363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-07-12 17:35:28.065466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.065476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.065568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.065577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.065738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.065748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.065849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.065859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.065949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.065959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-07-12 17:35:28.066110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.066120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.066258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.066269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.066348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.066358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.066440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.066450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.066674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.066684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-07-12 17:35:28.066792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.066801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.066946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.066956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.067043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.067053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.067136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.067145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.067289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.067299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-07-12 17:35:28.067479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.067489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.067676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.067686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.067795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.067807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.067888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.067898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.068134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.068144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-07-12 17:35:28.068299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.068309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.068494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.068504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.068592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.068602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.068835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.068845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.069063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.069073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.412 [2024-07-12 17:35:28.069174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.069184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.069272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.069283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.069380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.069391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.069532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.069541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 00:27:09.412 [2024-07-12 17:35:28.069696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.412 [2024-07-12 17:35:28.069706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.412 qpair failed and we were unable to recover it. 
00:27:09.415 [2024-07-12 17:35:28.085774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.415 [2024-07-12 17:35:28.085784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.415 qpair failed and we were unable to recover it. 00:27:09.415 [2024-07-12 17:35:28.085929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.415 [2024-07-12 17:35:28.085939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.415 qpair failed and we were unable to recover it. 00:27:09.415 [2024-07-12 17:35:28.086032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.415 [2024-07-12 17:35:28.086042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.415 qpair failed and we were unable to recover it. 00:27:09.415 [2024-07-12 17:35:28.086136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.415 [2024-07-12 17:35:28.086146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.415 qpair failed and we were unable to recover it. 00:27:09.415 [2024-07-12 17:35:28.086230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.415 [2024-07-12 17:35:28.086240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.415 qpair failed and we were unable to recover it. 
00:27:09.415 [2024-07-12 17:35:28.086326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.415 [2024-07-12 17:35:28.086336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.415 qpair failed and we were unable to recover it. 00:27:09.415 [2024-07-12 17:35:28.086434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.415 [2024-07-12 17:35:28.086444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.415 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.086657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.086668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.086844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.086854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.086947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.086958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 
00:27:09.416 [2024-07-12 17:35:28.087140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.087150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.087245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.087255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.087406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.087416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.087631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.087641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.087729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.087739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 
00:27:09.416 [2024-07-12 17:35:28.087812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.087822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.087979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.087990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.088063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.088073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.088283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.088294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.088385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.088396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 
00:27:09.416 [2024-07-12 17:35:28.088534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.088544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.088705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.088716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.088863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.088874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.088950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.088961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.089112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.089124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 
00:27:09.416 [2024-07-12 17:35:28.089212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.089223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.089381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.089391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.089550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.089560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.089645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.089655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.089865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.089875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 
00:27:09.416 [2024-07-12 17:35:28.089949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.089959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.090136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.090146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.090233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.090243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.090403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.090414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.090524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.090535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 
00:27:09.416 [2024-07-12 17:35:28.090635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.090645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.090731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.090742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.090894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.090903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.091117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.091128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.091213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.091223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 
00:27:09.416 [2024-07-12 17:35:28.091313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.091324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.091417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.091427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.091513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.091523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.091777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.091787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.091937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.091947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 
00:27:09.416 [2024-07-12 17:35:28.092024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.092033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.092174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.416 [2024-07-12 17:35:28.092184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.416 qpair failed and we were unable to recover it. 00:27:09.416 [2024-07-12 17:35:28.092394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.092404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.092501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.092511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.092664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.092674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 
00:27:09.417 [2024-07-12 17:35:28.092764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.092774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.092868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.092878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.092963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.092973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.093133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.093143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.093323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.093333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 
00:27:09.417 [2024-07-12 17:35:28.093537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.093548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.093686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.093697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.093783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.093794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.093980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.093992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.094085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.094095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 
00:27:09.417 [2024-07-12 17:35:28.094193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.094204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.094358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.094368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.094514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.094524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.094751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.094762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.094854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.094866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 
00:27:09.417 [2024-07-12 17:35:28.095038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.095048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.095208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.095219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.095311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.095322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.095419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.095430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.095524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.095534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 
00:27:09.417 [2024-07-12 17:35:28.095690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.095700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.095839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.095850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.095942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.095953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.096054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.096064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 00:27:09.417 [2024-07-12 17:35:28.096237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.417 [2024-07-12 17:35:28.096247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.417 qpair failed and we were unable to recover it. 
00:27:09.417 [2024-07-12 17:35:28.096352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.417 [2024-07-12 17:35:28.096362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.417 qpair failed and we were unable to recover it.
(previous three messages repeated 113 more times between [2024-07-12 17:35:28.096561] and [2024-07-12 17:35:28.113711], same tqpair=0x7f4a7c000b90, addr=10.0.0.2, port=4420)
00:27:09.420 [2024-07-12 17:35:28.113937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.420 [2024-07-12 17:35:28.113947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.420 qpair failed and we were unable to recover it.
00:27:09.420 [2024-07-12 17:35:28.114152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.420 [2024-07-12 17:35:28.114162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.420 qpair failed and we were unable to recover it. 00:27:09.420 [2024-07-12 17:35:28.114245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.420 [2024-07-12 17:35:28.114254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.420 qpair failed and we were unable to recover it. 00:27:09.420 [2024-07-12 17:35:28.114439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.420 [2024-07-12 17:35:28.114450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.420 qpair failed and we were unable to recover it. 00:27:09.420 [2024-07-12 17:35:28.114548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.420 [2024-07-12 17:35:28.114559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.420 qpair failed and we were unable to recover it. 00:27:09.420 [2024-07-12 17:35:28.114732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.420 [2024-07-12 17:35:28.114742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.420 qpair failed and we were unable to recover it. 
00:27:09.420 [2024-07-12 17:35:28.114972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.420 [2024-07-12 17:35:28.114983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.420 qpair failed and we were unable to recover it. 00:27:09.420 [2024-07-12 17:35:28.115200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.420 [2024-07-12 17:35:28.115210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.420 qpair failed and we were unable to recover it. 00:27:09.420 [2024-07-12 17:35:28.115471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.420 [2024-07-12 17:35:28.115482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.420 qpair failed and we were unable to recover it. 00:27:09.420 [2024-07-12 17:35:28.115633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.420 [2024-07-12 17:35:28.115643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.420 qpair failed and we were unable to recover it. 00:27:09.420 [2024-07-12 17:35:28.115826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.420 [2024-07-12 17:35:28.115836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.420 qpair failed and we were unable to recover it. 
00:27:09.420 [2024-07-12 17:35:28.115989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.420 [2024-07-12 17:35:28.115999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.420 qpair failed and we were unable to recover it. 00:27:09.420 [2024-07-12 17:35:28.116147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.420 [2024-07-12 17:35:28.116158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.420 qpair failed and we were unable to recover it. 00:27:09.420 [2024-07-12 17:35:28.116308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.420 [2024-07-12 17:35:28.116318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.420 qpair failed and we were unable to recover it. 00:27:09.420 [2024-07-12 17:35:28.116460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.116471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.116642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.116653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 
00:27:09.421 [2024-07-12 17:35:28.116746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.116757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.116956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.116967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.117106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.117116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.117264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.117274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.117458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.117469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 
00:27:09.421 [2024-07-12 17:35:28.117559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.117569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.117677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.117688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.117841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.117852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.118007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.118017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.118106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.118116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 
00:27:09.421 [2024-07-12 17:35:28.118292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.118302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.118411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.118421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.118579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.118589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.118756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.118766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.118920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.118930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 
00:27:09.421 [2024-07-12 17:35:28.119082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.119092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.119186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.119196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.119287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.119297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.119449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.119460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.119602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.119614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 
00:27:09.421 [2024-07-12 17:35:28.119765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.119776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.119864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.119874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.120022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.120032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.120186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.120196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.120282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.120292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 
00:27:09.421 [2024-07-12 17:35:28.120404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.120415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.120488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.120499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.120719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.120730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.120880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.120897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.121058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.121069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 
00:27:09.421 [2024-07-12 17:35:28.121210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.121220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.121312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.121322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.421 [2024-07-12 17:35:28.121528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.421 [2024-07-12 17:35:28.121539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.421 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.121747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.121758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.121868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.121879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 
00:27:09.692 [2024-07-12 17:35:28.122025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.122037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.122124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.122135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.122379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.122390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.122472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.122482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.122642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.122652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 
00:27:09.692 [2024-07-12 17:35:28.122741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.122752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.122846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.122856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.122999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.123016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.123126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.123136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.123351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.123361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 
00:27:09.692 [2024-07-12 17:35:28.123439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.123450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.123583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.123594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.123739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.123750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.123840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.123851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.123948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.123958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 
00:27:09.692 [2024-07-12 17:35:28.124056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.124066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.124147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.124157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.124364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.124375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.124453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.124464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.124623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.124633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 
00:27:09.692 [2024-07-12 17:35:28.124774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.124784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.124869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.124879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.124958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.124968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.125040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.692 [2024-07-12 17:35:28.125050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.692 qpair failed and we were unable to recover it. 00:27:09.692 [2024-07-12 17:35:28.125159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.693 [2024-07-12 17:35:28.125170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.693 qpair failed and we were unable to recover it. 
00:27:09.693 [2024-07-12 17:35:28.125316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.693 [2024-07-12 17:35:28.125326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.693 qpair failed and we were unable to recover it. 00:27:09.693 [2024-07-12 17:35:28.125545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.693 [2024-07-12 17:35:28.125555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.693 qpair failed and we were unable to recover it. 00:27:09.693 [2024-07-12 17:35:28.125718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.693 [2024-07-12 17:35:28.125728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.693 qpair failed and we were unable to recover it. 00:27:09.693 [2024-07-12 17:35:28.125815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.693 [2024-07-12 17:35:28.125825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.693 qpair failed and we were unable to recover it. 00:27:09.693 [2024-07-12 17:35:28.126056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.693 [2024-07-12 17:35:28.126066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.693 qpair failed and we were unable to recover it. 
00:27:09.696 [2024-07-12 17:35:28.144305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.144314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.144521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.144532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.144687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.144697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.144916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.144926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.145092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.145104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 
00:27:09.696 [2024-07-12 17:35:28.145245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.145255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.145482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.145493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.145704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.145713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.145918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.145928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.146136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.146146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 
00:27:09.696 [2024-07-12 17:35:28.146343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.146352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.146499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.146510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.146763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.146773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.147003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.147013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.147101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.147111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 
00:27:09.696 [2024-07-12 17:35:28.147205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.147215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.147446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.147457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.147686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.147696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.147929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.147939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.148146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.148156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 
00:27:09.696 [2024-07-12 17:35:28.148314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.148323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.148474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.148485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.148643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.148653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.148808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.148818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.149099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.149109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 
00:27:09.696 [2024-07-12 17:35:28.149320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.149329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.696 [2024-07-12 17:35:28.149563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.696 [2024-07-12 17:35:28.149573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.696 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.149712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.149722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.149955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.149965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.150128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.150138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 
00:27:09.697 [2024-07-12 17:35:28.150383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.150393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.150582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.150604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.150768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.150783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.151019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.151033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.151189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.151202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 
00:27:09.697 [2024-07-12 17:35:28.151416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.151430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.151592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.151606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.151770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.151784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.151958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.151972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.152185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.152199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 
00:27:09.697 [2024-07-12 17:35:28.152484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.152498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.152664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.152677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.152927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.152940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.153114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.153127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.153306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.153327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 
00:27:09.697 [2024-07-12 17:35:28.153497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.153511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.153724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.153737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.153971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.153984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.154191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.154204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.154477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.154491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 
00:27:09.697 [2024-07-12 17:35:28.154730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.154743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.154955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.154968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.155233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.155246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.155486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.155500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.155689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.155703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 
00:27:09.697 [2024-07-12 17:35:28.155940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.155954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.156147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.156160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.156388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.156402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.156621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.156635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.156873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.156886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 
00:27:09.697 [2024-07-12 17:35:28.157053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.157067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.157226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.157240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.157426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.157440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.157635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.697 [2024-07-12 17:35:28.157650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.697 qpair failed and we were unable to recover it. 00:27:09.697 [2024-07-12 17:35:28.157826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.157840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 
00:27:09.698 [2024-07-12 17:35:28.158034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.158048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.158259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.158273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.158382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.158396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.158589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.158603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.158830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.158844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 
00:27:09.698 [2024-07-12 17:35:28.159063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.159077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.159335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.159348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.159588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.159599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.159741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.159751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.159957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.159967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 
00:27:09.698 [2024-07-12 17:35:28.160211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.160221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.160385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.160396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.160648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.160658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.160811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.160820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.161010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.161020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 
00:27:09.698 [2024-07-12 17:35:28.161167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.161177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.161331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.161341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.161454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.161466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.161669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.161678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.161811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.161821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 
00:27:09.698 [2024-07-12 17:35:28.162057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.162067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:09.698 [2024-07-12 17:35:28.162224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.162234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:09.698 [2024-07-12 17:35:28.162449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.162461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.162566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.162575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.162727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.162737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 
00:27:09.698 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:09.698 [2024-07-12 17:35:28.162955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.162966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:09.698 [2024-07-12 17:35:28.163069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.163080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.163175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.163185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.698 [2024-07-12 17:35:28.163362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.163372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 
00:27:09.698 [2024-07-12 17:35:28.163596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.163606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.163787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.163797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.163906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.163917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.164123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.164133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.164340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.164351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 
00:27:09.698 [2024-07-12 17:35:28.164447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.698 [2024-07-12 17:35:28.164458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.698 qpair failed and we were unable to recover it. 00:27:09.698 [2024-07-12 17:35:28.164535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.164545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.164637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.164647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.164726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.164736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.164841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.164851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 
00:27:09.699 [2024-07-12 17:35:28.165004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.165014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.165110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.165119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.165209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.165220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.165312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.165322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.165437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.165449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 
00:27:09.699 [2024-07-12 17:35:28.165605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.165618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.165712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.165722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.165795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.165805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.165939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.165949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.166055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.166066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 
00:27:09.699 [2024-07-12 17:35:28.166210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.166220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.166359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.166370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.166531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.166542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.166729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.166739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.166933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.166945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 
00:27:09.699 [2024-07-12 17:35:28.167155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.167166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.167244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.167256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.167324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.167335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.167496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.167507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.167607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.167617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 
00:27:09.699 [2024-07-12 17:35:28.167745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.167755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.167949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.167959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.168159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.168170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.168260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.168270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.168356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.168365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 
00:27:09.699 [2024-07-12 17:35:28.168425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.168435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.168531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.168540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.168701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.168710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.168785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.168795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.168922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.168932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 
00:27:09.699 [2024-07-12 17:35:28.169019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.169029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.169130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.169140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.699 [2024-07-12 17:35:28.169295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.699 [2024-07-12 17:35:28.169306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.699 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.169418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.169428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.169570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.169580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 
00:27:09.700 [2024-07-12 17:35:28.169732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.169742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.169820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.169830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.169918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.169927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.170099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.170109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.170203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.170213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 
00:27:09.700 [2024-07-12 17:35:28.170300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.170310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.170434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.170444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.170542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.170552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.170630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.170641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.170833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.170844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 
00:27:09.700 [2024-07-12 17:35:28.170941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.170953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.171106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.171116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.171244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.171254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.171396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.171406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.171554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.171564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 
00:27:09.700 [2024-07-12 17:35:28.171717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.171728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.171839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.171850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.172114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.172125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.172356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.172366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.172533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.172544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 
00:27:09.700 [2024-07-12 17:35:28.172758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.172768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.172860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.172869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.173116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.173127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.173289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.173299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.173392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.173402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 
00:27:09.700 [2024-07-12 17:35:28.173480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.173490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.173585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.173594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.173703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.173713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.173822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.173832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 00:27:09.700 [2024-07-12 17:35:28.173944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.700 [2024-07-12 17:35:28.173955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.700 qpair failed and we were unable to recover it. 
00:27:09.700 [2024-07-12 17:35:28.174139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.700 [2024-07-12 17:35:28.174151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.700 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / sock connection error messages for tqpair=0x7f4a7c000b90 (addr=10.0.0.2, port=4420) repeat through 17:35:28.190766, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:09.703 [2024-07-12 17:35:28.190925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.703 [2024-07-12 17:35:28.190935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.703 qpair failed and we were unable to recover it. 00:27:09.703 [2024-07-12 17:35:28.191107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.703 [2024-07-12 17:35:28.191118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.703 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.191261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.191271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.191373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.191386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.191470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.191480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 
00:27:09.704 [2024-07-12 17:35:28.191607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.191617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.191729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.191739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.191848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.191858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.191998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.192008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.192262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.192272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 
00:27:09.704 [2024-07-12 17:35:28.192361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.192371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.192536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.192547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.192707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.192717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.192808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.192820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.193018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.193028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 
00:27:09.704 [2024-07-12 17:35:28.193187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.193197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.193456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.193466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.193617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.193627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.193713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.193724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.193810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.193821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 
00:27:09.704 [2024-07-12 17:35:28.194062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.194073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.194310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.194320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.194494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.194504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.194615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.194625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.194780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.194789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 
00:27:09.704 [2024-07-12 17:35:28.194990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.195002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.195158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.195168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.195406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.195416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.195514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.195525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.195636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.195646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 
00:27:09.704 [2024-07-12 17:35:28.195797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.195807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.195947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.195956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.196172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.196182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.196391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.196401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.196518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.196528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 
00:27:09.704 [2024-07-12 17:35:28.196716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.196725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.196869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.196880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.196964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.196974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.197095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.197104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 [2024-07-12 17:35:28.197322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.197331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 
00:27:09.704 [2024-07-12 17:35:28.197541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.704 [2024-07-12 17:35:28.197553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.704 qpair failed and we were unable to recover it. 00:27:09.704 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.704 [2024-07-12 17:35:28.197667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.197678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.197855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.197865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:09.705 [2024-07-12 17:35:28.197971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.197983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 
00:27:09.705 [2024-07-12 17:35:28.198167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.198177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.705 [2024-07-12 17:35:28.198425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.198437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.705 [2024-07-12 17:35:28.198594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.198605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.198814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.198824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.198921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.198930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 
00:27:09.705 [2024-07-12 17:35:28.199119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.199129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.199281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.199294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.199497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.199508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.199713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.199723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.199828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.199837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 
00:27:09.705 [2024-07-12 17:35:28.199944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.199953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.200158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.200168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.200344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.200354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.200518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.200530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.200714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.200724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 
00:27:09.705 [2024-07-12 17:35:28.200873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.200882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.201108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.201118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.201274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.201284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.201441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.201452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.201546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.201556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 
00:27:09.705 [2024-07-12 17:35:28.201713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.201723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.201921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.201930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.202183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.202193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.202433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.202443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.202585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.202595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 
00:27:09.705 [2024-07-12 17:35:28.202687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.202696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.202903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.202913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.202996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.203005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.203227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.203236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.203400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.203410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 
00:27:09.705 [2024-07-12 17:35:28.203659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.203669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.203831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.203841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.705 [2024-07-12 17:35:28.204024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.705 [2024-07-12 17:35:28.204033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.705 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.204216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.204226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.204443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.204454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 
00:27:09.706 [2024-07-12 17:35:28.204554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.204564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.204733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.204742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.204903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.204912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.205150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.205160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.205309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.205319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 
00:27:09.706 [2024-07-12 17:35:28.205474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.205485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.205627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.205638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.205807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.205816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.205909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.205919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.206102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.206112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 
00:27:09.706 [2024-07-12 17:35:28.206320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.206330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.206493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.206506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.206615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.206625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.206784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.206794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.207067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.207077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 
00:27:09.706 [2024-07-12 17:35:28.207219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.207229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.207473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.207484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.207695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.207705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.207846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.207856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.208024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.208034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 
00:27:09.706 [2024-07-12 17:35:28.208187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.208197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.208348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.208358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.208554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.208564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.208728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.208738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.208853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.208864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 
00:27:09.706 [2024-07-12 17:35:28.209140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.209151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.209397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.209408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.209510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.209520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.209610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.209620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.209778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.209788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 
00:27:09.706 [2024-07-12 17:35:28.209946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.209956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.210107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.210117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.210296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.706 [2024-07-12 17:35:28.210306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.706 qpair failed and we were unable to recover it. 00:27:09.706 [2024-07-12 17:35:28.210547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.210558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.210749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.210759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 
00:27:09.707 [2024-07-12 17:35:28.210935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.210945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.211126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.211136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.211357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.211368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.211523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.211534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.211685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.211695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 
00:27:09.707 [2024-07-12 17:35:28.211836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.211846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.212025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.212036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.212264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.212275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.212486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.212497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.212604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.212614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 
00:27:09.707 [2024-07-12 17:35:28.212804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.212815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.212974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.212985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.213061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.213071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.213302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.213313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.213474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.213486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 
00:27:09.707 [2024-07-12 17:35:28.213601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.213612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.213820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.213834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.213939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.213950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.214116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.214126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.214267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.214277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 
00:27:09.707 [2024-07-12 17:35:28.214485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.214497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.214753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.214764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.214993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.215005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.215243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.215255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.215522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.215534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 
00:27:09.707 [2024-07-12 17:35:28.215679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.215690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.215803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.215814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.216050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.216061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.216292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.216303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.216450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.216462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 
00:27:09.707 [2024-07-12 17:35:28.216679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.216689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.216839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.216849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.216944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.216954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.217033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.217043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.217251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.217262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 
00:27:09.707 [2024-07-12 17:35:28.217470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.217481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.217696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.217706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.217858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.217867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.707 qpair failed and we were unable to recover it. 00:27:09.707 [2024-07-12 17:35:28.218074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.707 [2024-07-12 17:35:28.218084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.218226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.218236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 
00:27:09.708 [2024-07-12 17:35:28.218476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.218487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.218770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.218789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 Malloc0 00:27:09.708 [2024-07-12 17:35:28.219009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.219020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.219183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.219193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.219346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.219356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 
00:27:09.708 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.708 [2024-07-12 17:35:28.219639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.219650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.219825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.219835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:09.708 [2024-07-12 17:35:28.219983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.219994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.220197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.220207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 
00:27:09.708 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.708 [2024-07-12 17:35:28.220360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.220371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.220525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.220535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.708 [2024-07-12 17:35:28.220701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.220711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.221733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.221760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.221943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.221954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 
00:27:09.708 [2024-07-12 17:35:28.222139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.222149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.222309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.222320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.222536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.222547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.222597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.708 [2024-07-12 17:35:28.222724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.222734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.223009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.223019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 
00:27:09.708 [2024-07-12 17:35:28.223119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.223129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.223360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.223370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.223548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.223558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.223723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.223733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 00:27:09.708 [2024-07-12 17:35:28.223961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.708 [2024-07-12 17:35:28.223972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.708 qpair failed and we were unable to recover it. 
00:27:09.708 [2024-07-12 17:35:28.224063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.708 [2024-07-12 17:35:28.224073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.708 qpair failed and we were unable to recover it.
00:27:09.708 [2024-07-12 17:35:28.224302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.708 [2024-07-12 17:35:28.224312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.708 qpair failed and we were unable to recover it.
00:27:09.708 [2024-07-12 17:35:28.224520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.708 [2024-07-12 17:35:28.224530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.708 qpair failed and we were unable to recover it.
00:27:09.708 [2024-07-12 17:35:28.224704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.708 [2024-07-12 17:35:28.224713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.708 qpair failed and we were unable to recover it.
00:27:09.708 [2024-07-12 17:35:28.224895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.708 [2024-07-12 17:35:28.224905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.708 qpair failed and we were unable to recover it.
00:27:09.708 [2024-07-12 17:35:28.225084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.708 [2024-07-12 17:35:28.225094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.708 qpair failed and we were unable to recover it.
00:27:09.708 [2024-07-12 17:35:28.225338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.708 [2024-07-12 17:35:28.225347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.708 qpair failed and we were unable to recover it.
00:27:09.708 [2024-07-12 17:35:28.225509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.708 [2024-07-12 17:35:28.225520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.708 qpair failed and we were unable to recover it.
00:27:09.708 [2024-07-12 17:35:28.225705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.708 [2024-07-12 17:35:28.225715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.708 qpair failed and we were unable to recover it.
00:27:09.708 [2024-07-12 17:35:28.225884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.708 [2024-07-12 17:35:28.225894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.226055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.226065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.226229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.226238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.226495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.226506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.226744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.226754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.226918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.226927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.227137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.227147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.227299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.227308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.227468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.227478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.227710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.227719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.227862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.227873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.228152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.228162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.228370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.228383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.228557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.228568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.228721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.228730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.228897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.228908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.229059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.229068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.229303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.229312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.229501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.229512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.229742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.229751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.229961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.229972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.230124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.230136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.230375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.230389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.230631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.230646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.230811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.230821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.230930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.230940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.231044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.231054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.709 [2024-07-12 17:35:28.231286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.231296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.231486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.231496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:09.709 [2024-07-12 17:35:28.231596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.231607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.231814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.231824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.709 [2024-07-12 17:35:28.231968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.231979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:09.709 [2024-07-12 17:35:28.232236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.232247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.233138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.233161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.233427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.233439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.233652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.233662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.233843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.233853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.234093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.709 [2024-07-12 17:35:28.234103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.709 qpair failed and we were unable to recover it.
00:27:09.709 [2024-07-12 17:35:28.234307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.234317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.234549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.234559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.234712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.234721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.234955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.234965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.235116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.235126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.235330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.235340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.235428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.235439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.235652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.235662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.235823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.235836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.235991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.236001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.236144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.236155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.236296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.236306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.236450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.236461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.236644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.236653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.236744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.236753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.236909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.236919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.237124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.237134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.237391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.237402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.237656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.237666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.237768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.237778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.237926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.237936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.238191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.238200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.238449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.238460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.238604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.238618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.238824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.238834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.239070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.239080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.710 [2024-07-12 17:35:28.239255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.239266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.239427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.239438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 [2024-07-12 17:35:28.239544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.239555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.710 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:09.710 [2024-07-12 17:35:28.239692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.710 [2024-07-12 17:35:28.239703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.710 qpair failed and we were unable to recover it.
00:27:09.711 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.711 [2024-07-12 17:35:28.239938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.239948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:09.711 [2024-07-12 17:35:28.240186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.240197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.240900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.240921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.241143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.241157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.241390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.241401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.241647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.241657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.241865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.241875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.242083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.242093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.242233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.242243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.242479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.242490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.242669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.242679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.242867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.242876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.243111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.243121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.243284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.243294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.243471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.243482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.243714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.243724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.243895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.243905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 00:27:09.711 [2024-07-12 17:35:28.244076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.244086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 00:27:09.711 [2024-07-12 17:35:28.244255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.244264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 00:27:09.711 [2024-07-12 17:35:28.244361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.244371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 00:27:09.711 [2024-07-12 17:35:28.244523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.244533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 
00:27:09.711 [2024-07-12 17:35:28.244690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.244699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 00:27:09.711 [2024-07-12 17:35:28.244931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.244941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 00:27:09.711 [2024-07-12 17:35:28.245157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.245167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 00:27:09.711 [2024-07-12 17:35:28.245397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.245407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 00:27:09.711 [2024-07-12 17:35:28.245615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.245624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 
00:27:09.711 [2024-07-12 17:35:28.245851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.245861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 00:27:09.711 [2024-07-12 17:35:28.246032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.246042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 00:27:09.711 [2024-07-12 17:35:28.246271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.246280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 00:27:09.711 [2024-07-12 17:35:28.246439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.246449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 00:27:09.711 [2024-07-12 17:35:28.246671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:09.711 [2024-07-12 17:35:28.246685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420 00:27:09.711 qpair failed and we were unable to recover it. 
00:27:09.711 [2024-07-12 17:35:28.246832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.246843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.246932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.246942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.247180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.247191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.711 [2024-07-12 17:35:28.247425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.247436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 [2024-07-12 17:35:28.247617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.247627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:09.711 [2024-07-12 17:35:28.247840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.247851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.711 [2024-07-12 17:35:28.248082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.711 [2024-07-12 17:35:28.248093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.711 qpair failed and we were unable to recover it.
00:27:09.711 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:09.711 [2024-07-12 17:35:28.248236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.712 [2024-07-12 17:35:28.248247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.248359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.712 [2024-07-12 17:35:28.248368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a7c000b90 with addr=10.0.0.2, port=4420
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.248494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.712 [2024-07-12 17:35:28.248529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.248641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.712 [2024-07-12 17:35:28.248656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.248871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.712 [2024-07-12 17:35:28.248885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.249147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.712 [2024-07-12 17:35:28.249160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.249414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.712 [2024-07-12 17:35:28.249428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.249664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.712 [2024-07-12 17:35:28.249677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.249826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.712 [2024-07-12 17:35:28.249840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.250025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.712 [2024-07-12 17:35:28.250038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.250268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.712 [2024-07-12 17:35:28.250282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.250538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.712 [2024-07-12 17:35:28.250552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.250709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.712 [2024-07-12 17:35:28.250722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4a84000b90 with addr=10.0.0.2, port=4420
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.250802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:09.712 [2024-07-12 17:35:28.253175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.712 [2024-07-12 17:35:28.253309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.712 [2024-07-12 17:35:28.253331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.712 [2024-07-12 17:35:28.253342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.712 [2024-07-12 17:35:28.253351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.712 [2024-07-12 17:35:28.253376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.712 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:09.712 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:09.712 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:09.712 [2024-07-12 17:35:28.263087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.712 [2024-07-12 17:35:28.263154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.712 [2024-07-12 17:35:28.263170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.712 [2024-07-12 17:35:28.263178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.712 [2024-07-12 17:35:28.263184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.712 [2024-07-12 17:35:28.263199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:09.712 17:35:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 31226
00:27:09.712 [2024-07-12 17:35:28.273108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.712 [2024-07-12 17:35:28.273170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.712 [2024-07-12 17:35:28.273186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.712 [2024-07-12 17:35:28.273193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.712 [2024-07-12 17:35:28.273199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.712 [2024-07-12 17:35:28.273214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.283066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.712 [2024-07-12 17:35:28.283131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.712 [2024-07-12 17:35:28.283146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.712 [2024-07-12 17:35:28.283153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.712 [2024-07-12 17:35:28.283158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.712 [2024-07-12 17:35:28.283173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.293151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.712 [2024-07-12 17:35:28.293251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.712 [2024-07-12 17:35:28.293267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.712 [2024-07-12 17:35:28.293274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.712 [2024-07-12 17:35:28.293280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.712 [2024-07-12 17:35:28.293297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.303125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.712 [2024-07-12 17:35:28.303184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.712 [2024-07-12 17:35:28.303200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.712 [2024-07-12 17:35:28.303206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.712 [2024-07-12 17:35:28.303212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.712 [2024-07-12 17:35:28.303227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.313172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.712 [2024-07-12 17:35:28.313229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.712 [2024-07-12 17:35:28.313244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.712 [2024-07-12 17:35:28.313251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.712 [2024-07-12 17:35:28.313257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.712 [2024-07-12 17:35:28.313271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.323168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.712 [2024-07-12 17:35:28.323229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.712 [2024-07-12 17:35:28.323244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.712 [2024-07-12 17:35:28.323250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.712 [2024-07-12 17:35:28.323256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.712 [2024-07-12 17:35:28.323272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.712 qpair failed and we were unable to recover it.
00:27:09.712 [2024-07-12 17:35:28.333166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.713 [2024-07-12 17:35:28.333240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.713 [2024-07-12 17:35:28.333255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.713 [2024-07-12 17:35:28.333262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.713 [2024-07-12 17:35:28.333268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.713 [2024-07-12 17:35:28.333282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.713 qpair failed and we were unable to recover it.
00:27:09.713 [2024-07-12 17:35:28.343218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.713 [2024-07-12 17:35:28.343273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.713 [2024-07-12 17:35:28.343292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.713 [2024-07-12 17:35:28.343299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.713 [2024-07-12 17:35:28.343305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.713 [2024-07-12 17:35:28.343320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.713 qpair failed and we were unable to recover it.
00:27:09.713 [2024-07-12 17:35:28.353309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.713 [2024-07-12 17:35:28.353371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.713 [2024-07-12 17:35:28.353390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.713 [2024-07-12 17:35:28.353396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.713 [2024-07-12 17:35:28.353402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.713 [2024-07-12 17:35:28.353417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.713 qpair failed and we were unable to recover it.
00:27:09.713 [2024-07-12 17:35:28.363323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.713 [2024-07-12 17:35:28.363392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.713 [2024-07-12 17:35:28.363407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.713 [2024-07-12 17:35:28.363415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.713 [2024-07-12 17:35:28.363420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.713 [2024-07-12 17:35:28.363436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.713 qpair failed and we were unable to recover it.
00:27:09.713 [2024-07-12 17:35:28.373339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.713 [2024-07-12 17:35:28.373402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.713 [2024-07-12 17:35:28.373417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.713 [2024-07-12 17:35:28.373424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.713 [2024-07-12 17:35:28.373430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.713 [2024-07-12 17:35:28.373445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.713 qpair failed and we were unable to recover it.
00:27:09.713 [2024-07-12 17:35:28.383380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.713 [2024-07-12 17:35:28.383436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.713 [2024-07-12 17:35:28.383450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.713 [2024-07-12 17:35:28.383457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.713 [2024-07-12 17:35:28.383465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.713 [2024-07-12 17:35:28.383480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.713 qpair failed and we were unable to recover it.
00:27:09.713 [2024-07-12 17:35:28.393435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.713 [2024-07-12 17:35:28.393495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.713 [2024-07-12 17:35:28.393510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.713 [2024-07-12 17:35:28.393516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.713 [2024-07-12 17:35:28.393522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.713 [2024-07-12 17:35:28.393536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.713 qpair failed and we were unable to recover it.
00:27:09.713 [2024-07-12 17:35:28.403411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.713 [2024-07-12 17:35:28.403471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.713 [2024-07-12 17:35:28.403486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.713 [2024-07-12 17:35:28.403493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.713 [2024-07-12 17:35:28.403499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.713 [2024-07-12 17:35:28.403513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.713 qpair failed and we were unable to recover it.
00:27:09.713 [2024-07-12 17:35:28.413443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.713 [2024-07-12 17:35:28.413504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.713 [2024-07-12 17:35:28.413518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.713 [2024-07-12 17:35:28.413524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.713 [2024-07-12 17:35:28.413530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.713 [2024-07-12 17:35:28.413545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.713 qpair failed and we were unable to recover it.
00:27:09.713 [2024-07-12 17:35:28.423476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.713 [2024-07-12 17:35:28.423572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.713 [2024-07-12 17:35:28.423586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.713 [2024-07-12 17:35:28.423592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.713 [2024-07-12 17:35:28.423598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.713 [2024-07-12 17:35:28.423612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.713 qpair failed and we were unable to recover it.
00:27:09.713 [2024-07-12 17:35:28.433514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.713 [2024-07-12 17:35:28.433576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.713 [2024-07-12 17:35:28.433591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.713 [2024-07-12 17:35:28.433597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.713 [2024-07-12 17:35:28.433603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.713 [2024-07-12 17:35:28.433617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.713 qpair failed and we were unable to recover it.
00:27:09.713 [2024-07-12 17:35:28.443517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.713 [2024-07-12 17:35:28.443575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.713 [2024-07-12 17:35:28.443589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.713 [2024-07-12 17:35:28.443595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.713 [2024-07-12 17:35:28.443601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.713 [2024-07-12 17:35:28.443615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.713 qpair failed and we were unable to recover it.
00:27:09.713 [2024-07-12 17:35:28.453569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.713 [2024-07-12 17:35:28.453630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.713 [2024-07-12 17:35:28.453645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.713 [2024-07-12 17:35:28.453651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.713 [2024-07-12 17:35:28.453657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.713 [2024-07-12 17:35:28.453671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.713 qpair failed and we were unable to recover it.
00:27:09.974 [2024-07-12 17:35:28.463595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.974 [2024-07-12 17:35:28.463653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.974 [2024-07-12 17:35:28.463667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.974 [2024-07-12 17:35:28.463674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.974 [2024-07-12 17:35:28.463680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.974 [2024-07-12 17:35:28.463695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.974 qpair failed and we were unable to recover it. 
00:27:09.974 [2024-07-12 17:35:28.473686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.974 [2024-07-12 17:35:28.473771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.974 [2024-07-12 17:35:28.473786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.974 [2024-07-12 17:35:28.473796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.974 [2024-07-12 17:35:28.473802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.974 [2024-07-12 17:35:28.473816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.974 qpair failed and we were unable to recover it. 
00:27:09.974 [2024-07-12 17:35:28.483650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.974 [2024-07-12 17:35:28.483708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.974 [2024-07-12 17:35:28.483722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.974 [2024-07-12 17:35:28.483729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.974 [2024-07-12 17:35:28.483735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.974 [2024-07-12 17:35:28.483750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.974 qpair failed and we were unable to recover it. 
00:27:09.974 [2024-07-12 17:35:28.493824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.974 [2024-07-12 17:35:28.493889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.974 [2024-07-12 17:35:28.493904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.974 [2024-07-12 17:35:28.493910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.974 [2024-07-12 17:35:28.493916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.974 [2024-07-12 17:35:28.493931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.974 qpair failed and we were unable to recover it. 
00:27:09.974 [2024-07-12 17:35:28.503760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.974 [2024-07-12 17:35:28.503815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.974 [2024-07-12 17:35:28.503829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.503836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.503841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.503856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.513840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.513905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.513919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.513926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.513932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.513946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.523841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.523946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.523961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.523968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.523974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.523987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.533800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.533857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.533871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.533878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.533884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.533898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.543849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.543900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.543915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.543923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.543928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.543943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.553849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.553905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.553919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.553926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.553932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.553946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.563865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.563925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.563939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.563949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.563955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.563970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.573884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.573937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.573951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.573958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.573964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.573979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.583909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.583968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.583982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.583989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.583994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.584008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.593987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.594039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.594053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.594060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.594065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.594079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.603985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.604082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.604096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.604103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.604109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.604123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.614040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.614102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.614116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.614123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.614128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.614143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.624021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.624080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.624122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.624128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.624134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.624149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.634057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.634113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.634127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.634134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.634140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.975 [2024-07-12 17:35:28.634154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.975 qpair failed and we were unable to recover it. 
00:27:09.975 [2024-07-12 17:35:28.644144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.975 [2024-07-12 17:35:28.644248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.975 [2024-07-12 17:35:28.644263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.975 [2024-07-12 17:35:28.644269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.975 [2024-07-12 17:35:28.644275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.976 [2024-07-12 17:35:28.644290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.976 qpair failed and we were unable to recover it. 
00:27:09.976 [2024-07-12 17:35:28.654105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.976 [2024-07-12 17:35:28.654166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.976 [2024-07-12 17:35:28.654183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.976 [2024-07-12 17:35:28.654189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.976 [2024-07-12 17:35:28.654195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.976 [2024-07-12 17:35:28.654209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.976 qpair failed and we were unable to recover it. 
00:27:09.976 [2024-07-12 17:35:28.664125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.976 [2024-07-12 17:35:28.664181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.976 [2024-07-12 17:35:28.664195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.976 [2024-07-12 17:35:28.664201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.976 [2024-07-12 17:35:28.664207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.976 [2024-07-12 17:35:28.664221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.976 qpair failed and we were unable to recover it. 
00:27:09.976 [2024-07-12 17:35:28.674092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.976 [2024-07-12 17:35:28.674149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.976 [2024-07-12 17:35:28.674164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.976 [2024-07-12 17:35:28.674171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.976 [2024-07-12 17:35:28.674177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.976 [2024-07-12 17:35:28.674191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.976 qpair failed and we were unable to recover it. 
00:27:09.976 [2024-07-12 17:35:28.684129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.976 [2024-07-12 17:35:28.684188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.976 [2024-07-12 17:35:28.684202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.976 [2024-07-12 17:35:28.684211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.976 [2024-07-12 17:35:28.684220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.976 [2024-07-12 17:35:28.684235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.976 qpair failed and we were unable to recover it. 
00:27:09.976 [2024-07-12 17:35:28.694216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.976 [2024-07-12 17:35:28.694275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.976 [2024-07-12 17:35:28.694289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.976 [2024-07-12 17:35:28.694296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.976 [2024-07-12 17:35:28.694302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.976 [2024-07-12 17:35:28.694319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.976 qpair failed and we were unable to recover it. 
00:27:09.976 [2024-07-12 17:35:28.704185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.976 [2024-07-12 17:35:28.704243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.976 [2024-07-12 17:35:28.704258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.976 [2024-07-12 17:35:28.704265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.976 [2024-07-12 17:35:28.704271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.976 [2024-07-12 17:35:28.704284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.976 qpair failed and we were unable to recover it. 
00:27:09.976 [2024-07-12 17:35:28.714279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.976 [2024-07-12 17:35:28.714363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.976 [2024-07-12 17:35:28.714381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.976 [2024-07-12 17:35:28.714388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.976 [2024-07-12 17:35:28.714394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.976 [2024-07-12 17:35:28.714408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.976 qpair failed and we were unable to recover it. 
00:27:09.976 [2024-07-12 17:35:28.724313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.976 [2024-07-12 17:35:28.724372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.976 [2024-07-12 17:35:28.724391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.976 [2024-07-12 17:35:28.724398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.976 [2024-07-12 17:35:28.724404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.976 [2024-07-12 17:35:28.724418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.976 qpair failed and we were unable to recover it. 
00:27:09.976 [2024-07-12 17:35:28.734267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:09.976 [2024-07-12 17:35:28.734327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:09.976 [2024-07-12 17:35:28.734341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:09.976 [2024-07-12 17:35:28.734348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:09.976 [2024-07-12 17:35:28.734353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:09.976 [2024-07-12 17:35:28.734368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:09.976 qpair failed and we were unable to recover it. 
00:27:09.976 [2024-07-12 17:35:28.744359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:09.976 [2024-07-12 17:35:28.744423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:09.976 [2024-07-12 17:35:28.744441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:09.976 [2024-07-12 17:35:28.744447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:09.976 [2024-07-12 17:35:28.744453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:09.976 [2024-07-12 17:35:28.744467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:09.976 qpair failed and we were unable to recover it.
00:27:10.236 [2024-07-12 17:35:28.754399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.236 [2024-07-12 17:35:28.754456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.236 [2024-07-12 17:35:28.754471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.236 [2024-07-12 17:35:28.754477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.236 [2024-07-12 17:35:28.754483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.236 [2024-07-12 17:35:28.754499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.236 qpair failed and we were unable to recover it.
00:27:10.236 [2024-07-12 17:35:28.764460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.236 [2024-07-12 17:35:28.764523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.236 [2024-07-12 17:35:28.764537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.236 [2024-07-12 17:35:28.764543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.236 [2024-07-12 17:35:28.764549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.236 [2024-07-12 17:35:28.764564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.236 qpair failed and we were unable to recover it.
00:27:10.236 [2024-07-12 17:35:28.774443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.236 [2024-07-12 17:35:28.774502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.236 [2024-07-12 17:35:28.774517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.236 [2024-07-12 17:35:28.774524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.236 [2024-07-12 17:35:28.774529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.236 [2024-07-12 17:35:28.774544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.236 qpair failed and we were unable to recover it.
00:27:10.236 [2024-07-12 17:35:28.784456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.236 [2024-07-12 17:35:28.784519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.236 [2024-07-12 17:35:28.784534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.236 [2024-07-12 17:35:28.784541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.236 [2024-07-12 17:35:28.784550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.236 [2024-07-12 17:35:28.784565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.236 qpair failed and we were unable to recover it.
00:27:10.236 [2024-07-12 17:35:28.794508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.236 [2024-07-12 17:35:28.794565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.236 [2024-07-12 17:35:28.794580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.236 [2024-07-12 17:35:28.794586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.236 [2024-07-12 17:35:28.794592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.236 [2024-07-12 17:35:28.794607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.236 qpair failed and we were unable to recover it.
00:27:10.236 [2024-07-12 17:35:28.804564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.236 [2024-07-12 17:35:28.804639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.804654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.804660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.804666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.804680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.814567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.814619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.814633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.814640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.814646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.814660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.824639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.824696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.824711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.824718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.824723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.824737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.834599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.834684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.834698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.834705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.834710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.834724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.844591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.844647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.844662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.844669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.844674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.844688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.854683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.854739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.854753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.854759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.854765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.854779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.864660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.864718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.864733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.864739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.864745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.864759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.874677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.874738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.874753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.874759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.874770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.874785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.884718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.884774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.884788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.884795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.884800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.884815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.894761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.894822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.894836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.894843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.894849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.894864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.904861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.904917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.904932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.904940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.904946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.904961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.914842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.914899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.914913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.914919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.914925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.914939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.924884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.924943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.924957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.924964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.924970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.924984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.934927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.934986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.935000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.935007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.237 [2024-07-12 17:35:28.935013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.237 [2024-07-12 17:35:28.935027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.237 qpair failed and we were unable to recover it.
00:27:10.237 [2024-07-12 17:35:28.944938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.237 [2024-07-12 17:35:28.944995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.237 [2024-07-12 17:35:28.945014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.237 [2024-07-12 17:35:28.945020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.238 [2024-07-12 17:35:28.945026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.238 [2024-07-12 17:35:28.945039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.238 qpair failed and we were unable to recover it.
00:27:10.238 [2024-07-12 17:35:28.954973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.238 [2024-07-12 17:35:28.955030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.238 [2024-07-12 17:35:28.955044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.238 [2024-07-12 17:35:28.955050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.238 [2024-07-12 17:35:28.955056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.238 [2024-07-12 17:35:28.955071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.238 qpair failed and we were unable to recover it.
00:27:10.238 [2024-07-12 17:35:28.964950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.238 [2024-07-12 17:35:28.965009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.238 [2024-07-12 17:35:28.965024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.238 [2024-07-12 17:35:28.965033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.238 [2024-07-12 17:35:28.965039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.238 [2024-07-12 17:35:28.965053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.238 qpair failed and we were unable to recover it.
00:27:10.238 [2024-07-12 17:35:28.975017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.238 [2024-07-12 17:35:28.975077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.238 [2024-07-12 17:35:28.975092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.238 [2024-07-12 17:35:28.975098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.238 [2024-07-12 17:35:28.975104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.238 [2024-07-12 17:35:28.975118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.238 qpair failed and we were unable to recover it.
00:27:10.238 [2024-07-12 17:35:28.985023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.238 [2024-07-12 17:35:28.985088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.238 [2024-07-12 17:35:28.985102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.238 [2024-07-12 17:35:28.985109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.238 [2024-07-12 17:35:28.985114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.238 [2024-07-12 17:35:28.985129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.238 qpair failed and we were unable to recover it.
00:27:10.238 [2024-07-12 17:35:28.995013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.238 [2024-07-12 17:35:28.995109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.238 [2024-07-12 17:35:28.995125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.238 [2024-07-12 17:35:28.995132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.238 [2024-07-12 17:35:28.995138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.238 [2024-07-12 17:35:28.995152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.238 qpair failed and we were unable to recover it.
00:27:10.238 [2024-07-12 17:35:29.005052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.238 [2024-07-12 17:35:29.005111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.238 [2024-07-12 17:35:29.005126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.238 [2024-07-12 17:35:29.005133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.238 [2024-07-12 17:35:29.005139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.238 [2024-07-12 17:35:29.005153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.238 qpair failed and we were unable to recover it.
00:27:10.499 [2024-07-12 17:35:29.015158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.499 [2024-07-12 17:35:29.015217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.499 [2024-07-12 17:35:29.015232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.499 [2024-07-12 17:35:29.015238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.499 [2024-07-12 17:35:29.015244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.499 [2024-07-12 17:35:29.015259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.499 qpair failed and we were unable to recover it.
00:27:10.499 [2024-07-12 17:35:29.025141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.499 [2024-07-12 17:35:29.025198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.499 [2024-07-12 17:35:29.025212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.499 [2024-07-12 17:35:29.025219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.499 [2024-07-12 17:35:29.025225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.499 [2024-07-12 17:35:29.025239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.499 qpair failed and we were unable to recover it.
00:27:10.499 [2024-07-12 17:35:29.035247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.499 [2024-07-12 17:35:29.035308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.499 [2024-07-12 17:35:29.035322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.499 [2024-07-12 17:35:29.035329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.499 [2024-07-12 17:35:29.035335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.499 [2024-07-12 17:35:29.035350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.499 qpair failed and we were unable to recover it.
00:27:10.499 [2024-07-12 17:35:29.045175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.499 [2024-07-12 17:35:29.045233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.499 [2024-07-12 17:35:29.045247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.499 [2024-07-12 17:35:29.045253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.499 [2024-07-12 17:35:29.045259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.499 [2024-07-12 17:35:29.045273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.499 qpair failed and we were unable to recover it.
00:27:10.499 [2024-07-12 17:35:29.055184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.499 [2024-07-12 17:35:29.055248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.499 [2024-07-12 17:35:29.055266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.499 [2024-07-12 17:35:29.055272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.499 [2024-07-12 17:35:29.055278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.499 [2024-07-12 17:35:29.055292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.499 qpair failed and we were unable to recover it.
00:27:10.499 [2024-07-12 17:35:29.065235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.499 [2024-07-12 17:35:29.065293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.499 [2024-07-12 17:35:29.065308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.499 [2024-07-12 17:35:29.065315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.499 [2024-07-12 17:35:29.065320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.499 [2024-07-12 17:35:29.065335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.499 qpair failed and we were unable to recover it.
00:27:10.499 [2024-07-12 17:35:29.075243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.499 [2024-07-12 17:35:29.075302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.499 [2024-07-12 17:35:29.075317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.499 [2024-07-12 17:35:29.075324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.499 [2024-07-12 17:35:29.075330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.499 [2024-07-12 17:35:29.075345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.499 qpair failed and we were unable to recover it.
00:27:10.499 [2024-07-12 17:35:29.085264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.499 [2024-07-12 17:35:29.085329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.499 [2024-07-12 17:35:29.085344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.499 [2024-07-12 17:35:29.085350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.499 [2024-07-12 17:35:29.085356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.499 [2024-07-12 17:35:29.085370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.499 qpair failed and we were unable to recover it.
00:27:10.499 [2024-07-12 17:35:29.095296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.499 [2024-07-12 17:35:29.095383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.499 [2024-07-12 17:35:29.095398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.499 [2024-07-12 17:35:29.095405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.499 [2024-07-12 17:35:29.095411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.499 [2024-07-12 17:35:29.095428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.500 qpair failed and we were unable to recover it.
00:27:10.500 [2024-07-12 17:35:29.105337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.105397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.105411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.105418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.105423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.105438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.115406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.115465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.115479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.115486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.115491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.115506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.125459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.125516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.125531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.125537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.125543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.125557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.135488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.135546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.135560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.135567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.135572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.135586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.145507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.145564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.145582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.145588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.145594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.145608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.155539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.155594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.155608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.155614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.155620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.155634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.165596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.165681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.165695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.165702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.165708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.165724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.175629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.175692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.175706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.175713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.175720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.175735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.185573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.185633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.185647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.185654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.185660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.185677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.195592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.195650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.195665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.195671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.195677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.195691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.205694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.205755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.205770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.205777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.205782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.205796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.215720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.215780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.215794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.215800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.215806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.215820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.225745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.225797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.225811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.225818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.225824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.225838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.235764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.500 [2024-07-12 17:35:29.235818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.500 [2024-07-12 17:35:29.235832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.500 [2024-07-12 17:35:29.235838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.500 [2024-07-12 17:35:29.235844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.500 [2024-07-12 17:35:29.235858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.500 qpair failed and we were unable to recover it. 
00:27:10.500 [2024-07-12 17:35:29.245807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.501 [2024-07-12 17:35:29.245864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.501 [2024-07-12 17:35:29.245879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.501 [2024-07-12 17:35:29.245886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.501 [2024-07-12 17:35:29.245891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.501 [2024-07-12 17:35:29.245905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.501 qpair failed and we were unable to recover it. 
00:27:10.501 [2024-07-12 17:35:29.255822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.501 [2024-07-12 17:35:29.255882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.501 [2024-07-12 17:35:29.255897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.501 [2024-07-12 17:35:29.255903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.501 [2024-07-12 17:35:29.255909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.501 [2024-07-12 17:35:29.255923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.501 qpair failed and we were unable to recover it. 
00:27:10.501 [2024-07-12 17:35:29.265849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.501 [2024-07-12 17:35:29.265905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.501 [2024-07-12 17:35:29.265920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.501 [2024-07-12 17:35:29.265927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.501 [2024-07-12 17:35:29.265933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.501 [2024-07-12 17:35:29.265947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.501 qpair failed and we were unable to recover it. 
00:27:10.501 [2024-07-12 17:35:29.275930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.501 [2024-07-12 17:35:29.275988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.501 [2024-07-12 17:35:29.276002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.501 [2024-07-12 17:35:29.276009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.501 [2024-07-12 17:35:29.276018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.501 [2024-07-12 17:35:29.276033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.501 qpair failed and we were unable to recover it. 
00:27:10.761 [2024-07-12 17:35:29.285901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.761 [2024-07-12 17:35:29.285960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.761 [2024-07-12 17:35:29.285975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.761 [2024-07-12 17:35:29.285982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.761 [2024-07-12 17:35:29.285988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.761 [2024-07-12 17:35:29.286002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.761 qpair failed and we were unable to recover it. 
00:27:10.761 [2024-07-12 17:35:29.295949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.761 [2024-07-12 17:35:29.296009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.761 [2024-07-12 17:35:29.296024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.761 [2024-07-12 17:35:29.296030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.761 [2024-07-12 17:35:29.296036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.761 [2024-07-12 17:35:29.296051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.761 qpair failed and we were unable to recover it. 
00:27:10.761 [2024-07-12 17:35:29.305964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.761 [2024-07-12 17:35:29.306022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.761 [2024-07-12 17:35:29.306036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.761 [2024-07-12 17:35:29.306043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.761 [2024-07-12 17:35:29.306048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.761 [2024-07-12 17:35:29.306063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.761 qpair failed and we were unable to recover it. 
00:27:10.761 [2024-07-12 17:35:29.315920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.761 [2024-07-12 17:35:29.315976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.761 [2024-07-12 17:35:29.315990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.761 [2024-07-12 17:35:29.315997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.761 [2024-07-12 17:35:29.316002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.762 [2024-07-12 17:35:29.316017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.762 qpair failed and we were unable to recover it. 
00:27:10.762 [2024-07-12 17:35:29.326018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.762 [2024-07-12 17:35:29.326105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.762 [2024-07-12 17:35:29.326120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.762 [2024-07-12 17:35:29.326126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.762 [2024-07-12 17:35:29.326132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.762 [2024-07-12 17:35:29.326146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.762 qpair failed and we were unable to recover it. 
00:27:10.762 [2024-07-12 17:35:29.336048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.762 [2024-07-12 17:35:29.336107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.762 [2024-07-12 17:35:29.336122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.762 [2024-07-12 17:35:29.336128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.762 [2024-07-12 17:35:29.336134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.762 [2024-07-12 17:35:29.336149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.762 qpair failed and we were unable to recover it. 
00:27:10.762 [2024-07-12 17:35:29.346096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.762 [2024-07-12 17:35:29.346153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.762 [2024-07-12 17:35:29.346169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.762 [2024-07-12 17:35:29.346176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.762 [2024-07-12 17:35:29.346182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.762 [2024-07-12 17:35:29.346196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.762 qpair failed and we were unable to recover it. 
00:27:10.762 [2024-07-12 17:35:29.356141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.762 [2024-07-12 17:35:29.356200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.762 [2024-07-12 17:35:29.356214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.762 [2024-07-12 17:35:29.356220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.762 [2024-07-12 17:35:29.356226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.762 [2024-07-12 17:35:29.356241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.762 qpair failed and we were unable to recover it. 
00:27:10.762 [2024-07-12 17:35:29.366141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.762 [2024-07-12 17:35:29.366200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.762 [2024-07-12 17:35:29.366214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.762 [2024-07-12 17:35:29.366224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.762 [2024-07-12 17:35:29.366230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.762 [2024-07-12 17:35:29.366244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.762 qpair failed and we were unable to recover it. 
00:27:10.762 [2024-07-12 17:35:29.376175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:10.762 [2024-07-12 17:35:29.376233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:10.762 [2024-07-12 17:35:29.376247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:10.762 [2024-07-12 17:35:29.376254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.762 [2024-07-12 17:35:29.376260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:10.762 [2024-07-12 17:35:29.376275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:10.762 qpair failed and we were unable to recover it. 
00:27:10.762 [2024-07-12 17:35:29.386195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.762 [2024-07-12 17:35:29.386249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.762 [2024-07-12 17:35:29.386263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.762 [2024-07-12 17:35:29.386270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.762 [2024-07-12 17:35:29.386276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.762 [2024-07-12 17:35:29.386290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.762 qpair failed and we were unable to recover it.
00:27:10.762 [2024-07-12 17:35:29.396219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.762 [2024-07-12 17:35:29.396274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.762 [2024-07-12 17:35:29.396289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.762 [2024-07-12 17:35:29.396295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.762 [2024-07-12 17:35:29.396301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.762 [2024-07-12 17:35:29.396315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.762 qpair failed and we were unable to recover it.
00:27:10.762 [2024-07-12 17:35:29.406259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.762 [2024-07-12 17:35:29.406318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.762 [2024-07-12 17:35:29.406331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.762 [2024-07-12 17:35:29.406338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.762 [2024-07-12 17:35:29.406344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.762 [2024-07-12 17:35:29.406358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.762 qpair failed and we were unable to recover it.
00:27:10.762 [2024-07-12 17:35:29.416282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.762 [2024-07-12 17:35:29.416337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.762 [2024-07-12 17:35:29.416351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.762 [2024-07-12 17:35:29.416358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.762 [2024-07-12 17:35:29.416363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.762 [2024-07-12 17:35:29.416383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.762 qpair failed and we were unable to recover it.
00:27:10.762 [2024-07-12 17:35:29.426299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.762 [2024-07-12 17:35:29.426355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.762 [2024-07-12 17:35:29.426369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.762 [2024-07-12 17:35:29.426375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.762 [2024-07-12 17:35:29.426384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.762 [2024-07-12 17:35:29.426399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.762 qpair failed and we were unable to recover it.
00:27:10.762 [2024-07-12 17:35:29.436301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.762 [2024-07-12 17:35:29.436360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.762 [2024-07-12 17:35:29.436374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.762 [2024-07-12 17:35:29.436384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.762 [2024-07-12 17:35:29.436390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.762 [2024-07-12 17:35:29.436405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.762 qpair failed and we were unable to recover it.
00:27:10.762 [2024-07-12 17:35:29.446418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.762 [2024-07-12 17:35:29.446477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.762 [2024-07-12 17:35:29.446491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.763 [2024-07-12 17:35:29.446498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.763 [2024-07-12 17:35:29.446504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.763 [2024-07-12 17:35:29.446519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.763 qpair failed and we were unable to recover it.
00:27:10.763 [2024-07-12 17:35:29.456403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.763 [2024-07-12 17:35:29.456465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.763 [2024-07-12 17:35:29.456483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.763 [2024-07-12 17:35:29.456490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.763 [2024-07-12 17:35:29.456495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.763 [2024-07-12 17:35:29.456510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.763 qpair failed and we were unable to recover it.
00:27:10.763 [2024-07-12 17:35:29.466428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.763 [2024-07-12 17:35:29.466487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.763 [2024-07-12 17:35:29.466502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.763 [2024-07-12 17:35:29.466509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.763 [2024-07-12 17:35:29.466514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.763 [2024-07-12 17:35:29.466529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.763 qpair failed and we were unable to recover it.
00:27:10.763 [2024-07-12 17:35:29.476446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.763 [2024-07-12 17:35:29.476501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.763 [2024-07-12 17:35:29.476515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.763 [2024-07-12 17:35:29.476522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.763 [2024-07-12 17:35:29.476527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.763 [2024-07-12 17:35:29.476542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.763 qpair failed and we were unable to recover it.
00:27:10.763 [2024-07-12 17:35:29.486489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.763 [2024-07-12 17:35:29.486548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.763 [2024-07-12 17:35:29.486563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.763 [2024-07-12 17:35:29.486569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.763 [2024-07-12 17:35:29.486575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.763 [2024-07-12 17:35:29.486590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.763 qpair failed and we were unable to recover it.
00:27:10.763 [2024-07-12 17:35:29.496492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.763 [2024-07-12 17:35:29.496548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.763 [2024-07-12 17:35:29.496562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.763 [2024-07-12 17:35:29.496569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.763 [2024-07-12 17:35:29.496574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.763 [2024-07-12 17:35:29.496588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.763 qpair failed and we were unable to recover it.
00:27:10.763 [2024-07-12 17:35:29.506542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.763 [2024-07-12 17:35:29.506638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.763 [2024-07-12 17:35:29.506652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.763 [2024-07-12 17:35:29.506659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.763 [2024-07-12 17:35:29.506665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.763 [2024-07-12 17:35:29.506679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.763 qpair failed and we were unable to recover it.
00:27:10.763 [2024-07-12 17:35:29.516568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.763 [2024-07-12 17:35:29.516628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.763 [2024-07-12 17:35:29.516642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.763 [2024-07-12 17:35:29.516649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.763 [2024-07-12 17:35:29.516654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.763 [2024-07-12 17:35:29.516668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.763 qpair failed and we were unable to recover it.
00:27:10.763 [2024-07-12 17:35:29.526605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.763 [2024-07-12 17:35:29.526660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.763 [2024-07-12 17:35:29.526675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.763 [2024-07-12 17:35:29.526682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.763 [2024-07-12 17:35:29.526687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.763 [2024-07-12 17:35:29.526702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.763 qpair failed and we were unable to recover it.
00:27:10.763 [2024-07-12 17:35:29.536650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:10.763 [2024-07-12 17:35:29.536713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:10.763 [2024-07-12 17:35:29.536728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:10.763 [2024-07-12 17:35:29.536734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:10.763 [2024-07-12 17:35:29.536740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:10.763 [2024-07-12 17:35:29.536754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:10.763 qpair failed and we were unable to recover it.
00:27:11.023 [2024-07-12 17:35:29.546586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.023 [2024-07-12 17:35:29.546643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.023 [2024-07-12 17:35:29.546660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.023 [2024-07-12 17:35:29.546666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.023 [2024-07-12 17:35:29.546672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.023 [2024-07-12 17:35:29.546686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.023 qpair failed and we were unable to recover it.
00:27:11.023 [2024-07-12 17:35:29.556715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.023 [2024-07-12 17:35:29.556768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.023 [2024-07-12 17:35:29.556782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.023 [2024-07-12 17:35:29.556789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.023 [2024-07-12 17:35:29.556795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.023 [2024-07-12 17:35:29.556809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.023 qpair failed and we were unable to recover it.
00:27:11.023 [2024-07-12 17:35:29.566643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.023 [2024-07-12 17:35:29.566701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.023 [2024-07-12 17:35:29.566716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.023 [2024-07-12 17:35:29.566722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.023 [2024-07-12 17:35:29.566728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.023 [2024-07-12 17:35:29.566742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.023 qpair failed and we were unable to recover it.
00:27:11.023 [2024-07-12 17:35:29.576807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.023 [2024-07-12 17:35:29.576893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.023 [2024-07-12 17:35:29.576907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.023 [2024-07-12 17:35:29.576913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.023 [2024-07-12 17:35:29.576919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.023 [2024-07-12 17:35:29.576933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.023 qpair failed and we were unable to recover it.
00:27:11.023 [2024-07-12 17:35:29.586764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.023 [2024-07-12 17:35:29.586823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.023 [2024-07-12 17:35:29.586837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.023 [2024-07-12 17:35:29.586844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.023 [2024-07-12 17:35:29.586849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.023 [2024-07-12 17:35:29.586866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.023 qpair failed and we were unable to recover it.
00:27:11.023 [2024-07-12 17:35:29.596798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.023 [2024-07-12 17:35:29.596848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.023 [2024-07-12 17:35:29.596862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.023 [2024-07-12 17:35:29.596869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.023 [2024-07-12 17:35:29.596875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.023 [2024-07-12 17:35:29.596889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.023 qpair failed and we were unable to recover it.
00:27:11.023 [2024-07-12 17:35:29.606847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.023 [2024-07-12 17:35:29.606903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.023 [2024-07-12 17:35:29.606917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.023 [2024-07-12 17:35:29.606924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.023 [2024-07-12 17:35:29.606930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.023 [2024-07-12 17:35:29.606945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.023 qpair failed and we were unable to recover it.
00:27:11.023 [2024-07-12 17:35:29.616859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.023 [2024-07-12 17:35:29.616917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.023 [2024-07-12 17:35:29.616931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.023 [2024-07-12 17:35:29.616938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.023 [2024-07-12 17:35:29.616943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.023 [2024-07-12 17:35:29.616958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.023 qpair failed and we were unable to recover it.
00:27:11.023 [2024-07-12 17:35:29.626888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.023 [2024-07-12 17:35:29.626946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.023 [2024-07-12 17:35:29.626960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.023 [2024-07-12 17:35:29.626967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.023 [2024-07-12 17:35:29.626973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.023 [2024-07-12 17:35:29.626986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.023 qpair failed and we were unable to recover it.
00:27:11.023 [2024-07-12 17:35:29.636910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.023 [2024-07-12 17:35:29.636968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.023 [2024-07-12 17:35:29.636985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.023 [2024-07-12 17:35:29.636991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.023 [2024-07-12 17:35:29.636997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.024 [2024-07-12 17:35:29.637011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.024 qpair failed and we were unable to recover it.
00:27:11.024 [2024-07-12 17:35:29.646955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.024 [2024-07-12 17:35:29.647034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.024 [2024-07-12 17:35:29.647048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.024 [2024-07-12 17:35:29.647055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.024 [2024-07-12 17:35:29.647060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.024 [2024-07-12 17:35:29.647074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.024 qpair failed and we were unable to recover it.
00:27:11.024 [2024-07-12 17:35:29.656977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.024 [2024-07-12 17:35:29.657035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.024 [2024-07-12 17:35:29.657050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.024 [2024-07-12 17:35:29.657056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.024 [2024-07-12 17:35:29.657062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.024 [2024-07-12 17:35:29.657077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.024 qpair failed and we were unable to recover it.
00:27:11.024 [2024-07-12 17:35:29.667004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.024 [2024-07-12 17:35:29.667061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.024 [2024-07-12 17:35:29.667074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.024 [2024-07-12 17:35:29.667081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.024 [2024-07-12 17:35:29.667086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.024 [2024-07-12 17:35:29.667101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.024 qpair failed and we were unable to recover it.
00:27:11.024 [2024-07-12 17:35:29.677039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.024 [2024-07-12 17:35:29.677091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.024 [2024-07-12 17:35:29.677106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.024 [2024-07-12 17:35:29.677113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.024 [2024-07-12 17:35:29.677121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.024 [2024-07-12 17:35:29.677136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.024 qpair failed and we were unable to recover it.
00:27:11.024 [2024-07-12 17:35:29.687029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.024 [2024-07-12 17:35:29.687085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.024 [2024-07-12 17:35:29.687100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.024 [2024-07-12 17:35:29.687106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.024 [2024-07-12 17:35:29.687112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.024 [2024-07-12 17:35:29.687126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.024 qpair failed and we were unable to recover it.
00:27:11.024 [2024-07-12 17:35:29.697111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.024 [2024-07-12 17:35:29.697176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.024 [2024-07-12 17:35:29.697190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.024 [2024-07-12 17:35:29.697197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.024 [2024-07-12 17:35:29.697203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.024 [2024-07-12 17:35:29.697217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.024 qpair failed and we were unable to recover it.
00:27:11.024 [2024-07-12 17:35:29.707134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.024 [2024-07-12 17:35:29.707197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.024 [2024-07-12 17:35:29.707212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.024 [2024-07-12 17:35:29.707218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.024 [2024-07-12 17:35:29.707224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.024 [2024-07-12 17:35:29.707238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.024 qpair failed and we were unable to recover it.
00:27:11.024 [2024-07-12 17:35:29.717075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.024 [2024-07-12 17:35:29.717132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.024 [2024-07-12 17:35:29.717146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.024 [2024-07-12 17:35:29.717152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.024 [2024-07-12 17:35:29.717158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.024 [2024-07-12 17:35:29.717172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.024 qpair failed and we were unable to recover it.
00:27:11.024 [2024-07-12 17:35:29.727179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.024 [2024-07-12 17:35:29.727241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.024 [2024-07-12 17:35:29.727256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.024 [2024-07-12 17:35:29.727262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.024 [2024-07-12 17:35:29.727268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.024 [2024-07-12 17:35:29.727281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.024 qpair failed and we were unable to recover it.
00:27:11.024 [2024-07-12 17:35:29.737252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.024 [2024-07-12 17:35:29.737352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.024 [2024-07-12 17:35:29.737367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.024 [2024-07-12 17:35:29.737373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.024 [2024-07-12 17:35:29.737383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.024 [2024-07-12 17:35:29.737397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.024 qpair failed and we were unable to recover it.
00:27:11.024 [2024-07-12 17:35:29.747263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.024 [2024-07-12 17:35:29.747366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.024 [2024-07-12 17:35:29.747384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.024 [2024-07-12 17:35:29.747391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.024 [2024-07-12 17:35:29.747397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.024 [2024-07-12 17:35:29.747411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.024 qpair failed and we were unable to recover it. 
00:27:11.024 [2024-07-12 17:35:29.757255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.024 [2024-07-12 17:35:29.757313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.024 [2024-07-12 17:35:29.757327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.024 [2024-07-12 17:35:29.757334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.024 [2024-07-12 17:35:29.757340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.024 [2024-07-12 17:35:29.757354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.024 qpair failed and we were unable to recover it. 
00:27:11.024 [2024-07-12 17:35:29.767258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.024 [2024-07-12 17:35:29.767318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.024 [2024-07-12 17:35:29.767333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.024 [2024-07-12 17:35:29.767343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.024 [2024-07-12 17:35:29.767349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.024 [2024-07-12 17:35:29.767363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.024 qpair failed and we were unable to recover it. 
00:27:11.024 [2024-07-12 17:35:29.777305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.024 [2024-07-12 17:35:29.777366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.024 [2024-07-12 17:35:29.777383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.024 [2024-07-12 17:35:29.777390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.024 [2024-07-12 17:35:29.777396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.024 [2024-07-12 17:35:29.777411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.024 qpair failed and we were unable to recover it. 
00:27:11.024 [2024-07-12 17:35:29.787326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.025 [2024-07-12 17:35:29.787387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.025 [2024-07-12 17:35:29.787402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.025 [2024-07-12 17:35:29.787408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.025 [2024-07-12 17:35:29.787414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.025 [2024-07-12 17:35:29.787428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.025 qpair failed and we were unable to recover it. 
00:27:11.025 [2024-07-12 17:35:29.797414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.025 [2024-07-12 17:35:29.797468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.025 [2024-07-12 17:35:29.797482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.025 [2024-07-12 17:35:29.797489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.025 [2024-07-12 17:35:29.797494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.025 [2024-07-12 17:35:29.797508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.025 qpair failed and we were unable to recover it. 
00:27:11.284 [2024-07-12 17:35:29.807387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.284 [2024-07-12 17:35:29.807447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.284 [2024-07-12 17:35:29.807462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.284 [2024-07-12 17:35:29.807468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.284 [2024-07-12 17:35:29.807474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.284 [2024-07-12 17:35:29.807489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.284 qpair failed and we were unable to recover it. 
00:27:11.284 [2024-07-12 17:35:29.817439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.284 [2024-07-12 17:35:29.817494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.284 [2024-07-12 17:35:29.817508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.284 [2024-07-12 17:35:29.817514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.284 [2024-07-12 17:35:29.817520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.284 [2024-07-12 17:35:29.817534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.284 qpair failed and we were unable to recover it. 
00:27:11.284 [2024-07-12 17:35:29.827482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.284 [2024-07-12 17:35:29.827538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.284 [2024-07-12 17:35:29.827552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.284 [2024-07-12 17:35:29.827559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.284 [2024-07-12 17:35:29.827565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.284 [2024-07-12 17:35:29.827578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.284 qpair failed and we were unable to recover it. 
00:27:11.284 [2024-07-12 17:35:29.837493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.284 [2024-07-12 17:35:29.837553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.284 [2024-07-12 17:35:29.837567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.284 [2024-07-12 17:35:29.837573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.284 [2024-07-12 17:35:29.837579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.284 [2024-07-12 17:35:29.837593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.284 qpair failed and we were unable to recover it. 
00:27:11.284 [2024-07-12 17:35:29.847532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.284 [2024-07-12 17:35:29.847589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.284 [2024-07-12 17:35:29.847604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.284 [2024-07-12 17:35:29.847610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.284 [2024-07-12 17:35:29.847616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.284 [2024-07-12 17:35:29.847630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.284 qpair failed and we were unable to recover it. 
00:27:11.284 [2024-07-12 17:35:29.857605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.284 [2024-07-12 17:35:29.857663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.284 [2024-07-12 17:35:29.857678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.284 [2024-07-12 17:35:29.857687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.284 [2024-07-12 17:35:29.857693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.284 [2024-07-12 17:35:29.857706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.284 qpair failed and we were unable to recover it. 
00:27:11.284 [2024-07-12 17:35:29.867645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.284 [2024-07-12 17:35:29.867698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.284 [2024-07-12 17:35:29.867712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.284 [2024-07-12 17:35:29.867719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.284 [2024-07-12 17:35:29.867725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.284 [2024-07-12 17:35:29.867739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.284 qpair failed and we were unable to recover it. 
00:27:11.284 [2024-07-12 17:35:29.877601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.284 [2024-07-12 17:35:29.877659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.284 [2024-07-12 17:35:29.877673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.284 [2024-07-12 17:35:29.877679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.284 [2024-07-12 17:35:29.877685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.284 [2024-07-12 17:35:29.877699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.284 qpair failed and we were unable to recover it. 
00:27:11.284 [2024-07-12 17:35:29.887660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.284 [2024-07-12 17:35:29.887718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.284 [2024-07-12 17:35:29.887732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.284 [2024-07-12 17:35:29.887738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.284 [2024-07-12 17:35:29.887744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.284 [2024-07-12 17:35:29.887758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.284 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:29.897673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:29.897732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:29.897747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:29.897753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:29.897759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:29.897773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:29.907706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:29.907764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:29.907778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:29.907784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:29.907790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:29.907804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:29.917730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:29.917784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:29.917798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:29.917805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:29.917811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:29.917825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:29.927806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:29.927861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:29.927875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:29.927882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:29.927888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:29.927902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:29.937813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:29.937873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:29.937886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:29.937893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:29.937899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:29.937913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:29.947814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:29.947867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:29.947884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:29.947891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:29.947896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:29.947912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:29.957839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:29.957897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:29.957911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:29.957918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:29.957923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:29.957938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:29.967880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:29.967938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:29.967952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:29.967959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:29.967965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:29.967980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:29.977892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:29.977951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:29.977965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:29.977972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:29.977977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:29.977992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:29.987926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:29.987985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:29.987999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:29.988006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:29.988012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:29.988030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:29.997903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:29.997959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:29.997973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:29.997980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:29.997986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:29.998001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:30.007993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:30.008050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:30.008064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:30.008070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:30.008076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:30.008091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:30.018303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:30.018400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:30.018421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:30.018430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:30.018437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:30.018458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:30.028200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:30.028264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.285 [2024-07-12 17:35:30.028282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.285 [2024-07-12 17:35:30.028289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.285 [2024-07-12 17:35:30.028296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.285 [2024-07-12 17:35:30.028313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.285 qpair failed and we were unable to recover it. 
00:27:11.285 [2024-07-12 17:35:30.038097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.285 [2024-07-12 17:35:30.038157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.286 [2024-07-12 17:35:30.038176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.286 [2024-07-12 17:35:30.038182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.286 [2024-07-12 17:35:30.038188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.286 [2024-07-12 17:35:30.038203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.286 qpair failed and we were unable to recover it. 
00:27:11.286 [2024-07-12 17:35:30.048144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.286 [2024-07-12 17:35:30.048221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.286 [2024-07-12 17:35:30.048236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.286 [2024-07-12 17:35:30.048242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.286 [2024-07-12 17:35:30.048248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.286 [2024-07-12 17:35:30.048262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.286 qpair failed and we were unable to recover it. 
00:27:11.286 [2024-07-12 17:35:30.058144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.286 [2024-07-12 17:35:30.058202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.286 [2024-07-12 17:35:30.058217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.286 [2024-07-12 17:35:30.058224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.286 [2024-07-12 17:35:30.058229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.286 [2024-07-12 17:35:30.058243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.286 qpair failed and we were unable to recover it. 
00:27:11.546 [2024-07-12 17:35:30.068167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.546 [2024-07-12 17:35:30.068226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.546 [2024-07-12 17:35:30.068241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.546 [2024-07-12 17:35:30.068247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.546 [2024-07-12 17:35:30.068253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.546 [2024-07-12 17:35:30.068268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.546 qpair failed and we were unable to recover it. 
00:27:11.546 [2024-07-12 17:35:30.078205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.546 [2024-07-12 17:35:30.078278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.546 [2024-07-12 17:35:30.078297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.546 [2024-07-12 17:35:30.078304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.546 [2024-07-12 17:35:30.078316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.546 [2024-07-12 17:35:30.078333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.546 qpair failed and we were unable to recover it. 
00:27:11.546 [2024-07-12 17:35:30.088283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.546 [2024-07-12 17:35:30.088344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.546 [2024-07-12 17:35:30.088359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.546 [2024-07-12 17:35:30.088366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.546 [2024-07-12 17:35:30.088372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.546 [2024-07-12 17:35:30.088392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.546 qpair failed and we were unable to recover it. 
00:27:11.546 [2024-07-12 17:35:30.098193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.546 [2024-07-12 17:35:30.098254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.546 [2024-07-12 17:35:30.098269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.546 [2024-07-12 17:35:30.098276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.546 [2024-07-12 17:35:30.098282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.546 [2024-07-12 17:35:30.098296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.546 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.108313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.108368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.108389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.108395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.108401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.108415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.118317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.118372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.118391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.118398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.118404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.118419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.128349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.128416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.128432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.128438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.128444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.128458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.138365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.138431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.138446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.138453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.138458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.138473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.148413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.148490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.148505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.148511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.148517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.148531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.158443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.158503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.158518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.158524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.158530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.158545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.168510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.168572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.168588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.168599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.168606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.168621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.178525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.178584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.178598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.178605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.178610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.178625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.188467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.188526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.188540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.188547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.188553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.188567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.198565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.198619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.198633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.198640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.198646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.198660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.208545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.208604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.208620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.208627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.208635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.208650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.218577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.218641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.218656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.218662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.218668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.218682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.228592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.228650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.228665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.228672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.228677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.228692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.238634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.238687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.547 [2024-07-12 17:35:30.238702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.547 [2024-07-12 17:35:30.238709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.547 [2024-07-12 17:35:30.238714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.547 [2024-07-12 17:35:30.238729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.547 qpair failed and we were unable to recover it. 
00:27:11.547 [2024-07-12 17:35:30.248673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.547 [2024-07-12 17:35:30.248731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.548 [2024-07-12 17:35:30.248746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.548 [2024-07-12 17:35:30.248752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.548 [2024-07-12 17:35:30.248758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.548 [2024-07-12 17:35:30.248772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.548 qpair failed and we were unable to recover it. 
00:27:11.548 [2024-07-12 17:35:30.258685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.548 [2024-07-12 17:35:30.258745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.548 [2024-07-12 17:35:30.258759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.548 [2024-07-12 17:35:30.258772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.548 [2024-07-12 17:35:30.258779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.548 [2024-07-12 17:35:30.258793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.548 qpair failed and we were unable to recover it. 
00:27:11.548 [2024-07-12 17:35:30.268721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.548 [2024-07-12 17:35:30.268779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.548 [2024-07-12 17:35:30.268793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.548 [2024-07-12 17:35:30.268800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.548 [2024-07-12 17:35:30.268806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.548 [2024-07-12 17:35:30.268820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.548 qpair failed and we were unable to recover it. 
00:27:11.548 [2024-07-12 17:35:30.278804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.548 [2024-07-12 17:35:30.278860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.548 [2024-07-12 17:35:30.278875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.548 [2024-07-12 17:35:30.278882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.548 [2024-07-12 17:35:30.278887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.548 [2024-07-12 17:35:30.278901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.548 qpair failed and we were unable to recover it. 
00:27:11.548 [2024-07-12 17:35:30.288782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.548 [2024-07-12 17:35:30.288844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.548 [2024-07-12 17:35:30.288859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.548 [2024-07-12 17:35:30.288866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.548 [2024-07-12 17:35:30.288872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.548 [2024-07-12 17:35:30.288886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.548 qpair failed and we were unable to recover it. 
00:27:11.548 [2024-07-12 17:35:30.298803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.548 [2024-07-12 17:35:30.298864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.548 [2024-07-12 17:35:30.298879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.548 [2024-07-12 17:35:30.298886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.548 [2024-07-12 17:35:30.298892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.548 [2024-07-12 17:35:30.298907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.548 qpair failed and we were unable to recover it. 
00:27:11.548 [2024-07-12 17:35:30.308832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.548 [2024-07-12 17:35:30.308893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.548 [2024-07-12 17:35:30.308907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.548 [2024-07-12 17:35:30.308914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.548 [2024-07-12 17:35:30.308920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.548 [2024-07-12 17:35:30.308934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.548 qpair failed and we were unable to recover it. 
00:27:11.548 [2024-07-12 17:35:30.318859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.548 [2024-07-12 17:35:30.318915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.548 [2024-07-12 17:35:30.318930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.548 [2024-07-12 17:35:30.318936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.548 [2024-07-12 17:35:30.318942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.548 [2024-07-12 17:35:30.318957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.548 qpair failed and we were unable to recover it. 
00:27:11.807 [2024-07-12 17:35:30.328893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.807 [2024-07-12 17:35:30.328950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.807 [2024-07-12 17:35:30.328965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.807 [2024-07-12 17:35:30.328972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.807 [2024-07-12 17:35:30.328978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.807 [2024-07-12 17:35:30.328992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.807 qpair failed and we were unable to recover it. 
00:27:11.807 [2024-07-12 17:35:30.338945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:11.807 [2024-07-12 17:35:30.339004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:11.807 [2024-07-12 17:35:30.339018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:11.807 [2024-07-12 17:35:30.339025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:11.807 [2024-07-12 17:35:30.339031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90 00:27:11.807 [2024-07-12 17:35:30.339045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:11.807 qpair failed and we were unable to recover it. 
00:27:11.807 [2024-07-12 17:35:30.348951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.807 [2024-07-12 17:35:30.349012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.807 [2024-07-12 17:35:30.349030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.807 [2024-07-12 17:35:30.349037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.807 [2024-07-12 17:35:30.349042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.807 [2024-07-12 17:35:30.349057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-12 17:35:30.358991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.807 [2024-07-12 17:35:30.359050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.807 [2024-07-12 17:35:30.359064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.807 [2024-07-12 17:35:30.359070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.807 [2024-07-12 17:35:30.359076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.807 [2024-07-12 17:35:30.359090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-12 17:35:30.369097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.807 [2024-07-12 17:35:30.369155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.807 [2024-07-12 17:35:30.369170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.807 [2024-07-12 17:35:30.369177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.807 [2024-07-12 17:35:30.369183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.807 [2024-07-12 17:35:30.369197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-12 17:35:30.379113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.807 [2024-07-12 17:35:30.379171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.807 [2024-07-12 17:35:30.379186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.807 [2024-07-12 17:35:30.379192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.807 [2024-07-12 17:35:30.379198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.807 [2024-07-12 17:35:30.379212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-12 17:35:30.389057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.807 [2024-07-12 17:35:30.389115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.807 [2024-07-12 17:35:30.389129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.807 [2024-07-12 17:35:30.389135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.807 [2024-07-12 17:35:30.389142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.807 [2024-07-12 17:35:30.389159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-12 17:35:30.399173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.807 [2024-07-12 17:35:30.399240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.807 [2024-07-12 17:35:30.399255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.807 [2024-07-12 17:35:30.399262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.807 [2024-07-12 17:35:30.399267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.807 [2024-07-12 17:35:30.399281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-12 17:35:30.409197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.807 [2024-07-12 17:35:30.409252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.807 [2024-07-12 17:35:30.409267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.807 [2024-07-12 17:35:30.409273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.807 [2024-07-12 17:35:30.409279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.807 [2024-07-12 17:35:30.409293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.807 qpair failed and we were unable to recover it.
00:27:11.807 [2024-07-12 17:35:30.419233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.807 [2024-07-12 17:35:30.419296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.419311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.419318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.419324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.419338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.429236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.429295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.429310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.429317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.429322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.429336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.439341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.439421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.439440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.439446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.439452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.439467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.449350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.449412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.449426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.449433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.449439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.449453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.459332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.459396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.459411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.459418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.459424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.459438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.469429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.469493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.469508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.469515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.469521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.469535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.479330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.479391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.479406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.479412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.479421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.479436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.489369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.489430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.489444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.489451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.489457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.489471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.499726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.499797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.499812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.499818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.499824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.499839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.509502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.509565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.509579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.509585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.509591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.509605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.519497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.519555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.519569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.519575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.519581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.519595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.529541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.529606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.529621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.529628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.529633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.529647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.539547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.539606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.539621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.539627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.539632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.539646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.549547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.549610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.549624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.549631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.549637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.549651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.559625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.559680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.559694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.559701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.559706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.559720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.569668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.569727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.569742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.569748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.569757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.569772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:11.808 [2024-07-12 17:35:30.579728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:11.808 [2024-07-12 17:35:30.579781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:11.808 [2024-07-12 17:35:30.579796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:11.808 [2024-07-12 17:35:30.579803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:11.808 [2024-07-12 17:35:30.579808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:11.808 [2024-07-12 17:35:30.579823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:11.808 qpair failed and we were unable to recover it.
00:27:12.066 [2024-07-12 17:35:30.589649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.066 [2024-07-12 17:35:30.589706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.066 [2024-07-12 17:35:30.589720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.066 [2024-07-12 17:35:30.589726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.066 [2024-07-12 17:35:30.589733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:12.066 [2024-07-12 17:35:30.589747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:12.066 qpair failed and we were unable to recover it.
00:27:12.066 [2024-07-12 17:35:30.599733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.066 [2024-07-12 17:35:30.599788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.066 [2024-07-12 17:35:30.599802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.066 [2024-07-12 17:35:30.599808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.066 [2024-07-12 17:35:30.599815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:12.067 [2024-07-12 17:35:30.599830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.609773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.609830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.609844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.609850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.609856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4a84000b90
00:27:12.067 [2024-07-12 17:35:30.609870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.619809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.619876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.619903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.619914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.619922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.619944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.629847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.629901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.629917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.629924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.629930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.629943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.639901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.639961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.639977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.639984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.639989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.640003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.649905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.649983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.649998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.650005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.650011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.650025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.659906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.659968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.659983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.659993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.659999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.660013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.669971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.670027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.670042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.670049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.670054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.670068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.679942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.680001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.680016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.680023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.680029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.680042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.690051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.690108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.690123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.690129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.690135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.690149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.700012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.700070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.700086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.700092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.700098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.700111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.710052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.067 [2024-07-12 17:35:30.710109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.067 [2024-07-12 17:35:30.710124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.067 [2024-07-12 17:35:30.710131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.067 [2024-07-12 17:35:30.710137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.067 [2024-07-12 17:35:30.710150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.067 qpair failed and we were unable to recover it. 
00:27:12.067 [2024-07-12 17:35:30.720077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.067 [2024-07-12 17:35:30.720131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.067 [2024-07-12 17:35:30.720146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.067 [2024-07-12 17:35:30.720153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.067 [2024-07-12 17:35:30.720159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.067 [2024-07-12 17:35:30.720173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.067 qpair failed and we were unable to recover it. 
00:27:12.067 [2024-07-12 17:35:30.730188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.730248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.730263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.730269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.730275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.730289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.740140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.740203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.740217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.740224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.740230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.740243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.750171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.750227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.750241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.750251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.750257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.750271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.760251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.760309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.760324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.760331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.760336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.760350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.770283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.770341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.067 [2024-07-12 17:35:30.770356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.067 [2024-07-12 17:35:30.770362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.067 [2024-07-12 17:35:30.770368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.067 [2024-07-12 17:35:30.770386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.067 qpair failed and we were unable to recover it.
00:27:12.067 [2024-07-12 17:35:30.780249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.067 [2024-07-12 17:35:30.780306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.068 [2024-07-12 17:35:30.780321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.068 [2024-07-12 17:35:30.780328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.068 [2024-07-12 17:35:30.780333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.068 [2024-07-12 17:35:30.780347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.068 qpair failed and we were unable to recover it.
00:27:12.068 [2024-07-12 17:35:30.790287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.068 [2024-07-12 17:35:30.790345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.068 [2024-07-12 17:35:30.790360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.068 [2024-07-12 17:35:30.790367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.068 [2024-07-12 17:35:30.790373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.068 [2024-07-12 17:35:30.790391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.068 qpair failed and we were unable to recover it.
00:27:12.068 [2024-07-12 17:35:30.800317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.068 [2024-07-12 17:35:30.800382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.068 [2024-07-12 17:35:30.800397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.068 [2024-07-12 17:35:30.800404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.068 [2024-07-12 17:35:30.800409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.068 [2024-07-12 17:35:30.800423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.068 qpair failed and we were unable to recover it.
00:27:12.068 [2024-07-12 17:35:30.810348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.068 [2024-07-12 17:35:30.810408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.068 [2024-07-12 17:35:30.810423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.068 [2024-07-12 17:35:30.810430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.068 [2024-07-12 17:35:30.810435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.068 [2024-07-12 17:35:30.810449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.068 qpair failed and we were unable to recover it.
00:27:12.068 [2024-07-12 17:35:30.820366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.068 [2024-07-12 17:35:30.820431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.068 [2024-07-12 17:35:30.820446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.068 [2024-07-12 17:35:30.820452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.068 [2024-07-12 17:35:30.820458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.068 [2024-07-12 17:35:30.820471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.068 qpair failed and we were unable to recover it.
00:27:12.068 [2024-07-12 17:35:30.830403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.068 [2024-07-12 17:35:30.830458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.068 [2024-07-12 17:35:30.830473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.068 [2024-07-12 17:35:30.830480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.068 [2024-07-12 17:35:30.830485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.068 [2024-07-12 17:35:30.830499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.068 qpair failed and we were unable to recover it.
00:27:12.068 [2024-07-12 17:35:30.840432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.068 [2024-07-12 17:35:30.840488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.068 [2024-07-12 17:35:30.840506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.068 [2024-07-12 17:35:30.840513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.068 [2024-07-12 17:35:30.840518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.068 [2024-07-12 17:35:30.840532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.068 qpair failed and we were unable to recover it.
00:27:12.328 [2024-07-12 17:35:30.850474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.328 [2024-07-12 17:35:30.850533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.328 [2024-07-12 17:35:30.850548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.328 [2024-07-12 17:35:30.850555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.328 [2024-07-12 17:35:30.850560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.328 [2024-07-12 17:35:30.850575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.328 qpair failed and we were unable to recover it.
00:27:12.328 [2024-07-12 17:35:30.860512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.328 [2024-07-12 17:35:30.860595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.328 [2024-07-12 17:35:30.860611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.328 [2024-07-12 17:35:30.860617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.328 [2024-07-12 17:35:30.860623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.328 [2024-07-12 17:35:30.860636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.328 qpair failed and we were unable to recover it.
00:27:12.328 [2024-07-12 17:35:30.870506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.328 [2024-07-12 17:35:30.870565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.328 [2024-07-12 17:35:30.870580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.328 [2024-07-12 17:35:30.870586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.328 [2024-07-12 17:35:30.870592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.328 [2024-07-12 17:35:30.870606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.328 qpair failed and we were unable to recover it.
00:27:12.328 [2024-07-12 17:35:30.880492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.328 [2024-07-12 17:35:30.880553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.328 [2024-07-12 17:35:30.880568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.328 [2024-07-12 17:35:30.880575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.328 [2024-07-12 17:35:30.880580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.328 [2024-07-12 17:35:30.880594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.328 qpair failed and we were unable to recover it.
00:27:12.328 [2024-07-12 17:35:30.890589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.328 [2024-07-12 17:35:30.890647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.328 [2024-07-12 17:35:30.890662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.328 [2024-07-12 17:35:30.890669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.328 [2024-07-12 17:35:30.890674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.328 [2024-07-12 17:35:30.890688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.328 qpair failed and we were unable to recover it.
00:27:12.328 [2024-07-12 17:35:30.900645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.328 [2024-07-12 17:35:30.900718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.328 [2024-07-12 17:35:30.900733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.328 [2024-07-12 17:35:30.900740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.328 [2024-07-12 17:35:30.900745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.328 [2024-07-12 17:35:30.900759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.328 qpair failed and we were unable to recover it.
00:27:12.328 [2024-07-12 17:35:30.910643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.328 [2024-07-12 17:35:30.910703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.328 [2024-07-12 17:35:30.910717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.328 [2024-07-12 17:35:30.910724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.328 [2024-07-12 17:35:30.910729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.328 [2024-07-12 17:35:30.910743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.328 qpair failed and we were unable to recover it.
00:27:12.328 [2024-07-12 17:35:30.920673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.328 [2024-07-12 17:35:30.920731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.328 [2024-07-12 17:35:30.920745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.328 [2024-07-12 17:35:30.920752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.328 [2024-07-12 17:35:30.920757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.328 [2024-07-12 17:35:30.920771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.328 qpair failed and we were unable to recover it.
00:27:12.328 [2024-07-12 17:35:30.930724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.328 [2024-07-12 17:35:30.930780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.328 [2024-07-12 17:35:30.930798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.328 [2024-07-12 17:35:30.930805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.328 [2024-07-12 17:35:30.930810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.328 [2024-07-12 17:35:30.930824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.328 qpair failed and we were unable to recover it.
00:27:12.328 [2024-07-12 17:35:30.940789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.328 [2024-07-12 17:35:30.940852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.328 [2024-07-12 17:35:30.940867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.328 [2024-07-12 17:35:30.940873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.328 [2024-07-12 17:35:30.940878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.328 [2024-07-12 17:35:30.940893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.328 qpair failed and we were unable to recover it.
00:27:12.328 [2024-07-12 17:35:30.950743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.328 [2024-07-12 17:35:30.950799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.328 [2024-07-12 17:35:30.950813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.328 [2024-07-12 17:35:30.950820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.329 [2024-07-12 17:35:30.950826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.329 [2024-07-12 17:35:30.950839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.329 qpair failed and we were unable to recover it.
00:27:12.329 [2024-07-12 17:35:30.960773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.329 [2024-07-12 17:35:30.960834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.329 [2024-07-12 17:35:30.960849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.329 [2024-07-12 17:35:30.960855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.329 [2024-07-12 17:35:30.960861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.329 [2024-07-12 17:35:30.960875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.329 qpair failed and we were unable to recover it.
00:27:12.329 [2024-07-12 17:35:30.970850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.329 [2024-07-12 17:35:30.970907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.329 [2024-07-12 17:35:30.970924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.329 [2024-07-12 17:35:30.970933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.329 [2024-07-12 17:35:30.970941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.329 [2024-07-12 17:35:30.970959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.329 qpair failed and we were unable to recover it.
00:27:12.329 [2024-07-12 17:35:30.980854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.329 [2024-07-12 17:35:30.980927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.329 [2024-07-12 17:35:30.980943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.329 [2024-07-12 17:35:30.980949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.329 [2024-07-12 17:35:30.980955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.329 [2024-07-12 17:35:30.980968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.329 qpair failed and we were unable to recover it.
00:27:12.329 [2024-07-12 17:35:30.990868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.329 [2024-07-12 17:35:30.990925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.329 [2024-07-12 17:35:30.990940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.329 [2024-07-12 17:35:30.990946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.329 [2024-07-12 17:35:30.990952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.329 [2024-07-12 17:35:30.990965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.329 qpair failed and we were unable to recover it.
00:27:12.329 [2024-07-12 17:35:31.000912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.329 [2024-07-12 17:35:31.001007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.329 [2024-07-12 17:35:31.001022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.329 [2024-07-12 17:35:31.001029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.329 [2024-07-12 17:35:31.001034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.329 [2024-07-12 17:35:31.001048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.329 qpair failed and we were unable to recover it.
00:27:12.329 [2024-07-12 17:35:31.010926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.329 [2024-07-12 17:35:31.010987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.329 [2024-07-12 17:35:31.011002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.329 [2024-07-12 17:35:31.011009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.329 [2024-07-12 17:35:31.011015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.329 [2024-07-12 17:35:31.011028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.329 qpair failed and we were unable to recover it.
00:27:12.329 [2024-07-12 17:35:31.020920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.329 [2024-07-12 17:35:31.020982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.329 [2024-07-12 17:35:31.021000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.329 [2024-07-12 17:35:31.021007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.329 [2024-07-12 17:35:31.021013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.329 [2024-07-12 17:35:31.021026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.329 qpair failed and we were unable to recover it.
00:27:12.329 [2024-07-12 17:35:31.030971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.329 [2024-07-12 17:35:31.031069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.329 [2024-07-12 17:35:31.031083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.329 [2024-07-12 17:35:31.031089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.329 [2024-07-12 17:35:31.031096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.329 [2024-07-12 17:35:31.031109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.329 qpair failed and we were unable to recover it. 
00:27:12.329 [2024-07-12 17:35:31.040999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.329 [2024-07-12 17:35:31.041054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.329 [2024-07-12 17:35:31.041068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.329 [2024-07-12 17:35:31.041075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.329 [2024-07-12 17:35:31.041081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.329 [2024-07-12 17:35:31.041094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.329 qpair failed and we were unable to recover it. 
00:27:12.329 [2024-07-12 17:35:31.051046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.329 [2024-07-12 17:35:31.051106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.329 [2024-07-12 17:35:31.051122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.329 [2024-07-12 17:35:31.051130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.329 [2024-07-12 17:35:31.051136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.329 [2024-07-12 17:35:31.051150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.329 qpair failed and we were unable to recover it. 
00:27:12.329 [2024-07-12 17:35:31.061056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.329 [2024-07-12 17:35:31.061115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.329 [2024-07-12 17:35:31.061130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.329 [2024-07-12 17:35:31.061137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.329 [2024-07-12 17:35:31.061143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.329 [2024-07-12 17:35:31.061160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.329 qpair failed and we were unable to recover it. 
00:27:12.329 [2024-07-12 17:35:31.071037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.329 [2024-07-12 17:35:31.071095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.329 [2024-07-12 17:35:31.071111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.329 [2024-07-12 17:35:31.071117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.329 [2024-07-12 17:35:31.071123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.329 [2024-07-12 17:35:31.071137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.329 qpair failed and we were unable to recover it. 
00:27:12.329 [2024-07-12 17:35:31.081141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.329 [2024-07-12 17:35:31.081223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.329 [2024-07-12 17:35:31.081239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.329 [2024-07-12 17:35:31.081246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.329 [2024-07-12 17:35:31.081252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.329 [2024-07-12 17:35:31.081266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.329 qpair failed and we were unable to recover it. 
00:27:12.329 [2024-07-12 17:35:31.091193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.329 [2024-07-12 17:35:31.091250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.329 [2024-07-12 17:35:31.091265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.329 [2024-07-12 17:35:31.091272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.329 [2024-07-12 17:35:31.091278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.329 [2024-07-12 17:35:31.091291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.330 qpair failed and we were unable to recover it. 
00:27:12.330 [2024-07-12 17:35:31.101180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.330 [2024-07-12 17:35:31.101284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.330 [2024-07-12 17:35:31.101300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.330 [2024-07-12 17:35:31.101307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.330 [2024-07-12 17:35:31.101312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.330 [2024-07-12 17:35:31.101326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.330 qpair failed and we were unable to recover it. 
00:27:12.590 [2024-07-12 17:35:31.111233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.590 [2024-07-12 17:35:31.111292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.590 [2024-07-12 17:35:31.111312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.590 [2024-07-12 17:35:31.111318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.590 [2024-07-12 17:35:31.111324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.590 [2024-07-12 17:35:31.111337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.590 qpair failed and we were unable to recover it. 
00:27:12.590 [2024-07-12 17:35:31.121241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.590 [2024-07-12 17:35:31.121296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.590 [2024-07-12 17:35:31.121312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.590 [2024-07-12 17:35:31.121319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.590 [2024-07-12 17:35:31.121324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.590 [2024-07-12 17:35:31.121338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.590 qpair failed and we were unable to recover it. 
00:27:12.590 [2024-07-12 17:35:31.131210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.590 [2024-07-12 17:35:31.131268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.590 [2024-07-12 17:35:31.131283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.590 [2024-07-12 17:35:31.131289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.590 [2024-07-12 17:35:31.131295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.590 [2024-07-12 17:35:31.131309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.590 qpair failed and we were unable to recover it. 
00:27:12.590 [2024-07-12 17:35:31.141317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.590 [2024-07-12 17:35:31.141387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.590 [2024-07-12 17:35:31.141403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.590 [2024-07-12 17:35:31.141409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.590 [2024-07-12 17:35:31.141415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.590 [2024-07-12 17:35:31.141429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.590 qpair failed and we were unable to recover it. 
00:27:12.590 [2024-07-12 17:35:31.151327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.590 [2024-07-12 17:35:31.151388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.590 [2024-07-12 17:35:31.151403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.590 [2024-07-12 17:35:31.151409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.590 [2024-07-12 17:35:31.151417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.590 [2024-07-12 17:35:31.151431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.590 qpair failed and we were unable to recover it. 
00:27:12.590 [2024-07-12 17:35:31.161354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.590 [2024-07-12 17:35:31.161429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.590 [2024-07-12 17:35:31.161445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.590 [2024-07-12 17:35:31.161451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.590 [2024-07-12 17:35:31.161457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.590 [2024-07-12 17:35:31.161470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.590 qpair failed and we were unable to recover it. 
00:27:12.590 [2024-07-12 17:35:31.171398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.590 [2024-07-12 17:35:31.171455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.590 [2024-07-12 17:35:31.171470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.590 [2024-07-12 17:35:31.171476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.590 [2024-07-12 17:35:31.171482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.590 [2024-07-12 17:35:31.171495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.590 qpair failed and we were unable to recover it. 
00:27:12.590 [2024-07-12 17:35:31.181364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.590 [2024-07-12 17:35:31.181425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.590 [2024-07-12 17:35:31.181440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.590 [2024-07-12 17:35:31.181446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.590 [2024-07-12 17:35:31.181453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.590 [2024-07-12 17:35:31.181466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.590 qpair failed and we were unable to recover it. 
00:27:12.590 [2024-07-12 17:35:31.191442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.590 [2024-07-12 17:35:31.191536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.590 [2024-07-12 17:35:31.191551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.590 [2024-07-12 17:35:31.191557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.590 [2024-07-12 17:35:31.191563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.590 [2024-07-12 17:35:31.191577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.590 qpair failed and we were unable to recover it. 
00:27:12.590 [2024-07-12 17:35:31.201470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.591 [2024-07-12 17:35:31.201530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.591 [2024-07-12 17:35:31.201545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.591 [2024-07-12 17:35:31.201552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.591 [2024-07-12 17:35:31.201558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.591 [2024-07-12 17:35:31.201571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.591 qpair failed and we were unable to recover it. 
00:27:12.591 [2024-07-12 17:35:31.211448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.591 [2024-07-12 17:35:31.211508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.591 [2024-07-12 17:35:31.211523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.591 [2024-07-12 17:35:31.211530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.591 [2024-07-12 17:35:31.211536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.591 [2024-07-12 17:35:31.211549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.591 qpair failed and we were unable to recover it. 
00:27:12.591 [2024-07-12 17:35:31.221533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.591 [2024-07-12 17:35:31.221619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.591 [2024-07-12 17:35:31.221634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.591 [2024-07-12 17:35:31.221640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.591 [2024-07-12 17:35:31.221646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.591 [2024-07-12 17:35:31.221659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.591 qpair failed and we were unable to recover it. 
00:27:12.591 [2024-07-12 17:35:31.231539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.591 [2024-07-12 17:35:31.231605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.591 [2024-07-12 17:35:31.231619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.591 [2024-07-12 17:35:31.231626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.591 [2024-07-12 17:35:31.231631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.591 [2024-07-12 17:35:31.231646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.591 qpair failed and we were unable to recover it. 
00:27:12.591 [2024-07-12 17:35:31.241515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.591 [2024-07-12 17:35:31.241574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.591 [2024-07-12 17:35:31.241589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.591 [2024-07-12 17:35:31.241595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.591 [2024-07-12 17:35:31.241604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.591 [2024-07-12 17:35:31.241618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.591 qpair failed and we were unable to recover it. 
00:27:12.591 [2024-07-12 17:35:31.251644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.591 [2024-07-12 17:35:31.251704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.591 [2024-07-12 17:35:31.251719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.591 [2024-07-12 17:35:31.251725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.591 [2024-07-12 17:35:31.251731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.591 [2024-07-12 17:35:31.251745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.591 qpair failed and we were unable to recover it. 
00:27:12.591 [2024-07-12 17:35:31.261621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.591 [2024-07-12 17:35:31.261680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.591 [2024-07-12 17:35:31.261694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.591 [2024-07-12 17:35:31.261701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.591 [2024-07-12 17:35:31.261707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.591 [2024-07-12 17:35:31.261720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.591 qpair failed and we were unable to recover it. 
00:27:12.591 [2024-07-12 17:35:31.271674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.591 [2024-07-12 17:35:31.271732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.591 [2024-07-12 17:35:31.271746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.591 [2024-07-12 17:35:31.271753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.591 [2024-07-12 17:35:31.271759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.591 [2024-07-12 17:35:31.271772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.591 qpair failed and we were unable to recover it. 
00:27:12.591 [2024-07-12 17:35:31.281754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.591 [2024-07-12 17:35:31.281818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.591 [2024-07-12 17:35:31.281833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.591 [2024-07-12 17:35:31.281840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.591 [2024-07-12 17:35:31.281846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.591 [2024-07-12 17:35:31.281860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.591 qpair failed and we were unable to recover it. 
00:27:12.591 [2024-07-12 17:35:31.291744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.591 [2024-07-12 17:35:31.291808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.591 [2024-07-12 17:35:31.291823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.591 [2024-07-12 17:35:31.291829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.591 [2024-07-12 17:35:31.291835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.591 [2024-07-12 17:35:31.291849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.591 qpair failed and we were unable to recover it. 
00:27:12.591 [2024-07-12 17:35:31.301781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:12.591 [2024-07-12 17:35:31.301841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:12.591 [2024-07-12 17:35:31.301857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:12.591 [2024-07-12 17:35:31.301864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:12.591 [2024-07-12 17:35:31.301870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:12.591 [2024-07-12 17:35:31.301884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.591 qpair failed and we were unable to recover it. 
00:27:12.591 [2024-07-12 17:35:31.311800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.591 [2024-07-12 17:35:31.311857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.591 [2024-07-12 17:35:31.311872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.591 [2024-07-12 17:35:31.311878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.591 [2024-07-12 17:35:31.311884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.591 [2024-07-12 17:35:31.311897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.591 qpair failed and we were unable to recover it.
00:27:12.591 [2024-07-12 17:35:31.321830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.591 [2024-07-12 17:35:31.321884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.591 [2024-07-12 17:35:31.321899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.591 [2024-07-12 17:35:31.321905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.591 [2024-07-12 17:35:31.321911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.591 [2024-07-12 17:35:31.321924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.591 qpair failed and we were unable to recover it.
00:27:12.591 [2024-07-12 17:35:31.331893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.591 [2024-07-12 17:35:31.331951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.591 [2024-07-12 17:35:31.331966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.591 [2024-07-12 17:35:31.331973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.591 [2024-07-12 17:35:31.331981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.591 [2024-07-12 17:35:31.331995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.591 qpair failed and we were unable to recover it.
00:27:12.591 [2024-07-12 17:35:31.341885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.591 [2024-07-12 17:35:31.341942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.591 [2024-07-12 17:35:31.341960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.591 [2024-07-12 17:35:31.341967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.591 [2024-07-12 17:35:31.341973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.592 [2024-07-12 17:35:31.341987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.592 qpair failed and we were unable to recover it.
00:27:12.592 [2024-07-12 17:35:31.351915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.592 [2024-07-12 17:35:31.351967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.592 [2024-07-12 17:35:31.351984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.592 [2024-07-12 17:35:31.351991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.592 [2024-07-12 17:35:31.351997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.592 [2024-07-12 17:35:31.352011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.592 qpair failed and we were unable to recover it.
00:27:12.592 [2024-07-12 17:35:31.361944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.592 [2024-07-12 17:35:31.362000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.592 [2024-07-12 17:35:31.362015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.592 [2024-07-12 17:35:31.362022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.592 [2024-07-12 17:35:31.362028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.592 [2024-07-12 17:35:31.362042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.592 qpair failed and we were unable to recover it.
00:27:12.852 [2024-07-12 17:35:31.371985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.852 [2024-07-12 17:35:31.372041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.852 [2024-07-12 17:35:31.372057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.852 [2024-07-12 17:35:31.372064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.852 [2024-07-12 17:35:31.372070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.852 [2024-07-12 17:35:31.372084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.852 qpair failed and we were unable to recover it.
00:27:12.852 [2024-07-12 17:35:31.381986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.852 [2024-07-12 17:35:31.382045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.852 [2024-07-12 17:35:31.382061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.852 [2024-07-12 17:35:31.382067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.852 [2024-07-12 17:35:31.382073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.852 [2024-07-12 17:35:31.382086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.852 qpair failed and we were unable to recover it.
00:27:12.852 [2024-07-12 17:35:31.392039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.852 [2024-07-12 17:35:31.392110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.852 [2024-07-12 17:35:31.392125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.852 [2024-07-12 17:35:31.392132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.852 [2024-07-12 17:35:31.392138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.852 [2024-07-12 17:35:31.392151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.852 qpair failed and we were unable to recover it.
00:27:12.852 [2024-07-12 17:35:31.402059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.852 [2024-07-12 17:35:31.402111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.852 [2024-07-12 17:35:31.402125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.852 [2024-07-12 17:35:31.402132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.852 [2024-07-12 17:35:31.402138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.852 [2024-07-12 17:35:31.402152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.852 qpair failed and we were unable to recover it.
00:27:12.852 [2024-07-12 17:35:31.412098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.852 [2024-07-12 17:35:31.412156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.852 [2024-07-12 17:35:31.412171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.852 [2024-07-12 17:35:31.412177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.852 [2024-07-12 17:35:31.412183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.852 [2024-07-12 17:35:31.412196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.852 qpair failed and we were unable to recover it.
00:27:12.852 [2024-07-12 17:35:31.422149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.852 [2024-07-12 17:35:31.422227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.852 [2024-07-12 17:35:31.422242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.852 [2024-07-12 17:35:31.422251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.852 [2024-07-12 17:35:31.422257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.852 [2024-07-12 17:35:31.422271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.852 qpair failed and we were unable to recover it.
00:27:12.852 [2024-07-12 17:35:31.432193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.852 [2024-07-12 17:35:31.432269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.852 [2024-07-12 17:35:31.432285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.852 [2024-07-12 17:35:31.432291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.852 [2024-07-12 17:35:31.432296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.852 [2024-07-12 17:35:31.432310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.852 qpair failed and we were unable to recover it.
00:27:12.852 [2024-07-12 17:35:31.442170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.852 [2024-07-12 17:35:31.442228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.852 [2024-07-12 17:35:31.442243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.852 [2024-07-12 17:35:31.442249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.852 [2024-07-12 17:35:31.442255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.852 [2024-07-12 17:35:31.442269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.852 qpair failed and we were unable to recover it.
00:27:12.852 [2024-07-12 17:35:31.452213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.852 [2024-07-12 17:35:31.452271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.852 [2024-07-12 17:35:31.452285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.852 [2024-07-12 17:35:31.452292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.852 [2024-07-12 17:35:31.452297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.852 [2024-07-12 17:35:31.452311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.852 qpair failed and we were unable to recover it.
00:27:12.852 [2024-07-12 17:35:31.462227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.852 [2024-07-12 17:35:31.462311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.852 [2024-07-12 17:35:31.462326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.852 [2024-07-12 17:35:31.462333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.852 [2024-07-12 17:35:31.462338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.852 [2024-07-12 17:35:31.462352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.852 qpair failed and we were unable to recover it.
00:27:12.852 [2024-07-12 17:35:31.472281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.852 [2024-07-12 17:35:31.472338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.852 [2024-07-12 17:35:31.472353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.852 [2024-07-12 17:35:31.472360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.852 [2024-07-12 17:35:31.472366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.852 [2024-07-12 17:35:31.472387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.852 qpair failed and we were unable to recover it.
00:27:12.852 [2024-07-12 17:35:31.482254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.852 [2024-07-12 17:35:31.482356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.852 [2024-07-12 17:35:31.482372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.852 [2024-07-12 17:35:31.482384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.852 [2024-07-12 17:35:31.482391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.852 [2024-07-12 17:35:31.482405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.852 qpair failed and we were unable to recover it.
00:27:12.852 [2024-07-12 17:35:31.492302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.852 [2024-07-12 17:35:31.492361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.852 [2024-07-12 17:35:31.492381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.492388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.492394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.492407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:12.853 [2024-07-12 17:35:31.502268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.853 [2024-07-12 17:35:31.502328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.853 [2024-07-12 17:35:31.502343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.502349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.502354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.502368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:12.853 [2024-07-12 17:35:31.512370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.853 [2024-07-12 17:35:31.512434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.853 [2024-07-12 17:35:31.512449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.512458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.512464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.512477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:12.853 [2024-07-12 17:35:31.522400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.853 [2024-07-12 17:35:31.522467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.853 [2024-07-12 17:35:31.522482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.522488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.522494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.522507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:12.853 [2024-07-12 17:35:31.532425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.853 [2024-07-12 17:35:31.532488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.853 [2024-07-12 17:35:31.532503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.532509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.532515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.532528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:12.853 [2024-07-12 17:35:31.542491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.853 [2024-07-12 17:35:31.542574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.853 [2024-07-12 17:35:31.542589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.542595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.542601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.542614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:12.853 [2024-07-12 17:35:31.552442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.853 [2024-07-12 17:35:31.552503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.853 [2024-07-12 17:35:31.552518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.552524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.552530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.552544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:12.853 [2024-07-12 17:35:31.562477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.853 [2024-07-12 17:35:31.562581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.853 [2024-07-12 17:35:31.562604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.562611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.562617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.562631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:12.853 [2024-07-12 17:35:31.572472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.853 [2024-07-12 17:35:31.572530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.853 [2024-07-12 17:35:31.572545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.572552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.572557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.572571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:12.853 [2024-07-12 17:35:31.582559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.853 [2024-07-12 17:35:31.582620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.853 [2024-07-12 17:35:31.582635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.582641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.582647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.582661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:12.853 [2024-07-12 17:35:31.592628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.853 [2024-07-12 17:35:31.592689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.853 [2024-07-12 17:35:31.592705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.592711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.592717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.592731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:12.853 [2024-07-12 17:35:31.602634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.853 [2024-07-12 17:35:31.602691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.853 [2024-07-12 17:35:31.602709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.602716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.602722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.602735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:12.853 [2024-07-12 17:35:31.612677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.853 [2024-07-12 17:35:31.612750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.853 [2024-07-12 17:35:31.612765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.612772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.612778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.612791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:12.853 [2024-07-12 17:35:31.622696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:12.853 [2024-07-12 17:35:31.622755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:12.853 [2024-07-12 17:35:31.622770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:12.853 [2024-07-12 17:35:31.622776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:12.853 [2024-07-12 17:35:31.622782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:12.853 [2024-07-12 17:35:31.622796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:12.853 qpair failed and we were unable to recover it.
00:27:13.113 [2024-07-12 17:35:31.632715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.113 [2024-07-12 17:35:31.632776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.113 [2024-07-12 17:35:31.632790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.113 [2024-07-12 17:35:31.632797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.113 [2024-07-12 17:35:31.632803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.113 [2024-07-12 17:35:31.632816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.113 qpair failed and we were unable to recover it.
00:27:13.113 [2024-07-12 17:35:31.642739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.113 [2024-07-12 17:35:31.642799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.113 [2024-07-12 17:35:31.642814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.113 [2024-07-12 17:35:31.642821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.113 [2024-07-12 17:35:31.642826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.113 [2024-07-12 17:35:31.642840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.113 qpair failed and we were unable to recover it.
00:27:13.113 [2024-07-12 17:35:31.652809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.113 [2024-07-12 17:35:31.652885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.113 [2024-07-12 17:35:31.652900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.113 [2024-07-12 17:35:31.652907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.113 [2024-07-12 17:35:31.652913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.113 [2024-07-12 17:35:31.652926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.113 qpair failed and we were unable to recover it.
00:27:13.113 [2024-07-12 17:35:31.662800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.113 [2024-07-12 17:35:31.662856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.113 [2024-07-12 17:35:31.662871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.113 [2024-07-12 17:35:31.662877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.113 [2024-07-12 17:35:31.662883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.113 [2024-07-12 17:35:31.662896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.113 qpair failed and we were unable to recover it.
00:27:13.114 [2024-07-12 17:35:31.672743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.672800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.672815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.672821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.672827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.672843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.682849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.682906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.682921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.682928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.682934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.682947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.692819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.692878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.692895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.692902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.692908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.692922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.702975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.703072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.703086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.703093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.703099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.703112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.712925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.712984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.712999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.713005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.713011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.713024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.722994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.723054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.723069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.723075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.723081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.723094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.732927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.732985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.733000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.733006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.733012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.733029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.742941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.742996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.743010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.743016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.743022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.743035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.753063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.753128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.753142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.753148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.753154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.753167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.763002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.763063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.763077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.763083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.763089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.763102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.773165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.773222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.773237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.773244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.773250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.773263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.783141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.783228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.783245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.783252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.783257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.783272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.793149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.793210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.793225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.793231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.793236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.793250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.803125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.803184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.803199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.114 [2024-07-12 17:35:31.803206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.114 [2024-07-12 17:35:31.803211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.114 [2024-07-12 17:35:31.803225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.114 qpair failed and we were unable to recover it. 
00:27:13.114 [2024-07-12 17:35:31.813221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.114 [2024-07-12 17:35:31.813277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.114 [2024-07-12 17:35:31.813292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.115 [2024-07-12 17:35:31.813298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.115 [2024-07-12 17:35:31.813304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.115 [2024-07-12 17:35:31.813317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.115 qpair failed and we were unable to recover it. 
00:27:13.115 [2024-07-12 17:35:31.823185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.115 [2024-07-12 17:35:31.823245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.115 [2024-07-12 17:35:31.823259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.115 [2024-07-12 17:35:31.823266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.115 [2024-07-12 17:35:31.823272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.115 [2024-07-12 17:35:31.823288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.115 qpair failed and we were unable to recover it. 
00:27:13.115 [2024-07-12 17:35:31.833210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.115 [2024-07-12 17:35:31.833269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.115 [2024-07-12 17:35:31.833284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.115 [2024-07-12 17:35:31.833291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.115 [2024-07-12 17:35:31.833296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.115 [2024-07-12 17:35:31.833310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.115 qpair failed and we were unable to recover it. 
00:27:13.115 [2024-07-12 17:35:31.843311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.115 [2024-07-12 17:35:31.843367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.115 [2024-07-12 17:35:31.843386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.115 [2024-07-12 17:35:31.843392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.115 [2024-07-12 17:35:31.843398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.115 [2024-07-12 17:35:31.843412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.115 qpair failed and we were unable to recover it. 
00:27:13.115 [2024-07-12 17:35:31.853271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.115 [2024-07-12 17:35:31.853328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.115 [2024-07-12 17:35:31.853343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.115 [2024-07-12 17:35:31.853349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.115 [2024-07-12 17:35:31.853355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.115 [2024-07-12 17:35:31.853368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.115 qpair failed and we were unable to recover it. 
00:27:13.115 [2024-07-12 17:35:31.863291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.115 [2024-07-12 17:35:31.863349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.115 [2024-07-12 17:35:31.863364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.115 [2024-07-12 17:35:31.863371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.115 [2024-07-12 17:35:31.863382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.115 [2024-07-12 17:35:31.863397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.115 qpair failed and we were unable to recover it. 
00:27:13.115 [2024-07-12 17:35:31.873418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.115 [2024-07-12 17:35:31.873471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.115 [2024-07-12 17:35:31.873491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.115 [2024-07-12 17:35:31.873497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.115 [2024-07-12 17:35:31.873503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.115 [2024-07-12 17:35:31.873517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.115 qpair failed and we were unable to recover it. 
00:27:13.115 [2024-07-12 17:35:31.883415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.115 [2024-07-12 17:35:31.883466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.115 [2024-07-12 17:35:31.883481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.115 [2024-07-12 17:35:31.883488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.115 [2024-07-12 17:35:31.883494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.115 [2024-07-12 17:35:31.883508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.115 qpair failed and we were unable to recover it. 
00:27:13.376 [2024-07-12 17:35:31.893391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.376 [2024-07-12 17:35:31.893450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.376 [2024-07-12 17:35:31.893465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.376 [2024-07-12 17:35:31.893472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.376 [2024-07-12 17:35:31.893478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.376 [2024-07-12 17:35:31.893492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.376 qpair failed and we were unable to recover it. 
00:27:13.376 [2024-07-12 17:35:31.903469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.376 [2024-07-12 17:35:31.903526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.376 [2024-07-12 17:35:31.903541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.376 [2024-07-12 17:35:31.903548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.376 [2024-07-12 17:35:31.903554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.376 [2024-07-12 17:35:31.903567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.376 qpair failed and we were unable to recover it. 
00:27:13.376 [2024-07-12 17:35:31.913459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.376 [2024-07-12 17:35:31.913514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.376 [2024-07-12 17:35:31.913528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.376 [2024-07-12 17:35:31.913535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.376 [2024-07-12 17:35:31.913543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.376 [2024-07-12 17:35:31.913557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.376 qpair failed and we were unable to recover it. 
00:27:13.376 [2024-07-12 17:35:31.923476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.376 [2024-07-12 17:35:31.923533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.376 [2024-07-12 17:35:31.923548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.376 [2024-07-12 17:35:31.923554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.376 [2024-07-12 17:35:31.923560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.376 [2024-07-12 17:35:31.923573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.376 qpair failed and we were unable to recover it. 
00:27:13.376 [2024-07-12 17:35:31.933569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:31.933628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:31.933643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:31.933650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:31.933655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:31.933669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:31.943591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:31.943693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:31.943708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:31.943714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:31.943720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:31.943734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:31.953557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:31.953620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:31.953634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:31.953641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:31.953646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:31.953660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:31.963584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:31.963641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:31.963656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:31.963662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:31.963668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:31.963681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:31.973669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:31.973726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:31.973741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:31.973748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:31.973753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:31.973767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:31.983753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:31.983807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:31.983822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:31.983830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:31.983835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:31.983849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:31.993668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:31.993726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:31.993741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:31.993747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:31.993753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:31.993767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:32.003703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:32.003759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:32.003774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:32.003780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:32.003789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:32.003802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:32.013811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:32.013870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:32.013884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:32.013890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:32.013896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:32.013910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:32.023836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:32.023897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:32.023911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:32.023918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:32.023923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:32.023937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:32.033904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:32.033975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:32.033990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:32.033996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:32.034002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:32.034016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:32.043889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:32.043943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:32.043958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:32.043964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:32.043970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:32.043984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:32.053957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:32.054022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:32.054037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:32.054043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:32.054049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:32.054063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:32.063944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:32.064004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:32.064018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:32.064024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.377 [2024-07-12 17:35:32.064030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.377 [2024-07-12 17:35:32.064043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.377 qpair failed and we were unable to recover it. 
00:27:13.377 [2024-07-12 17:35:32.073969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.377 [2024-07-12 17:35:32.074028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.377 [2024-07-12 17:35:32.074043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.377 [2024-07-12 17:35:32.074050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.378 [2024-07-12 17:35:32.074056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.378 [2024-07-12 17:35:32.074069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.378 qpair failed and we were unable to recover it. 
00:27:13.378 [2024-07-12 17:35:32.083975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.378 [2024-07-12 17:35:32.084069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.378 [2024-07-12 17:35:32.084085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.378 [2024-07-12 17:35:32.084091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.378 [2024-07-12 17:35:32.084097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.378 [2024-07-12 17:35:32.084111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.378 qpair failed and we were unable to recover it. 
00:27:13.378 [2024-07-12 17:35:32.094067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.378 [2024-07-12 17:35:32.094127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.378 [2024-07-12 17:35:32.094141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.378 [2024-07-12 17:35:32.094148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.378 [2024-07-12 17:35:32.094157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.378 [2024-07-12 17:35:32.094171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.378 qpair failed and we were unable to recover it. 
00:27:13.378 [2024-07-12 17:35:32.104046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.378 [2024-07-12 17:35:32.104101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.378 [2024-07-12 17:35:32.104116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.378 [2024-07-12 17:35:32.104123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.378 [2024-07-12 17:35:32.104128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.378 [2024-07-12 17:35:32.104142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.378 qpair failed and we were unable to recover it. 
00:27:13.378 [2024-07-12 17:35:32.114152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.378 [2024-07-12 17:35:32.114210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.378 [2024-07-12 17:35:32.114224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.378 [2024-07-12 17:35:32.114231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.378 [2024-07-12 17:35:32.114237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.378 [2024-07-12 17:35:32.114250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.378 qpair failed and we were unable to recover it. 
00:27:13.378 [2024-07-12 17:35:32.124109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.378 [2024-07-12 17:35:32.124165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.378 [2024-07-12 17:35:32.124179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.378 [2024-07-12 17:35:32.124186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.378 [2024-07-12 17:35:32.124192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.378 [2024-07-12 17:35:32.124206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.378 qpair failed and we were unable to recover it. 
00:27:13.378 [2024-07-12 17:35:32.134148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.378 [2024-07-12 17:35:32.134209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.378 [2024-07-12 17:35:32.134224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.378 [2024-07-12 17:35:32.134230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.378 [2024-07-12 17:35:32.134236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.378 [2024-07-12 17:35:32.134250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.378 qpair failed and we were unable to recover it. 
00:27:13.378 [2024-07-12 17:35:32.144157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.378 [2024-07-12 17:35:32.144216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.378 [2024-07-12 17:35:32.144231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.378 [2024-07-12 17:35:32.144238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.378 [2024-07-12 17:35:32.144244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.378 [2024-07-12 17:35:32.144257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.378 qpair failed and we were unable to recover it. 
00:27:13.378 [2024-07-12 17:35:32.154205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.378 [2024-07-12 17:35:32.154257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.378 [2024-07-12 17:35:32.154271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.378 [2024-07-12 17:35:32.154278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.378 [2024-07-12 17:35:32.154284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.378 [2024-07-12 17:35:32.154297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.378 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-12 17:35:32.164222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.640 [2024-07-12 17:35:32.164279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.640 [2024-07-12 17:35:32.164294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.640 [2024-07-12 17:35:32.164301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.640 [2024-07-12 17:35:32.164308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.640 [2024-07-12 17:35:32.164322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-12 17:35:32.174262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.640 [2024-07-12 17:35:32.174318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.640 [2024-07-12 17:35:32.174333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.640 [2024-07-12 17:35:32.174340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.640 [2024-07-12 17:35:32.174346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.640 [2024-07-12 17:35:32.174359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-12 17:35:32.184302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.640 [2024-07-12 17:35:32.184363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.640 [2024-07-12 17:35:32.184381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.640 [2024-07-12 17:35:32.184392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.640 [2024-07-12 17:35:32.184397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.640 [2024-07-12 17:35:32.184411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-12 17:35:32.194313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.640 [2024-07-12 17:35:32.194368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.640 [2024-07-12 17:35:32.194386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.640 [2024-07-12 17:35:32.194393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.640 [2024-07-12 17:35:32.194399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.640 [2024-07-12 17:35:32.194412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-12 17:35:32.204334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.640 [2024-07-12 17:35:32.204397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.640 [2024-07-12 17:35:32.204413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.640 [2024-07-12 17:35:32.204419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.640 [2024-07-12 17:35:32.204425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.640 [2024-07-12 17:35:32.204439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-12 17:35:32.214383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.640 [2024-07-12 17:35:32.214440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.640 [2024-07-12 17:35:32.214455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.640 [2024-07-12 17:35:32.214461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.640 [2024-07-12 17:35:32.214467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.640 [2024-07-12 17:35:32.214481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-12 17:35:32.224386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.640 [2024-07-12 17:35:32.224445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.640 [2024-07-12 17:35:32.224460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.640 [2024-07-12 17:35:32.224466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.640 [2024-07-12 17:35:32.224472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.640 [2024-07-12 17:35:32.224485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-12 17:35:32.234451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.640 [2024-07-12 17:35:32.234512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.640 [2024-07-12 17:35:32.234527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.640 [2024-07-12 17:35:32.234533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.640 [2024-07-12 17:35:32.234539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.640 [2024-07-12 17:35:32.234552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-12 17:35:32.244497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.640 [2024-07-12 17:35:32.244557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.640 [2024-07-12 17:35:32.244572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.640 [2024-07-12 17:35:32.244581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.640 [2024-07-12 17:35:32.244587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.640 [2024-07-12 17:35:32.244601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-12 17:35:32.254473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.640 [2024-07-12 17:35:32.254529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.640 [2024-07-12 17:35:32.254543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.640 [2024-07-12 17:35:32.254549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.640 [2024-07-12 17:35:32.254555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.640 [2024-07-12 17:35:32.254569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-12 17:35:32.264517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.640 [2024-07-12 17:35:32.264574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.640 [2024-07-12 17:35:32.264589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.640 [2024-07-12 17:35:32.264596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.640 [2024-07-12 17:35:32.264601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.640 [2024-07-12 17:35:32.264615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.640 qpair failed and we were unable to recover it. 
00:27:13.640 [2024-07-12 17:35:32.274552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.640 [2024-07-12 17:35:32.274608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.640 [2024-07-12 17:35:32.274622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.640 [2024-07-12 17:35:32.274632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.274638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.274652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.284568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.284626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.284641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.284647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.284653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.284667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.294592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.294650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.294664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.294671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.294677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.294690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.304624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.304729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.304746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.304752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.304758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.304772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.314640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.314700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.314715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.314721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.314727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.314741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.324688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.324745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.324759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.324766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.324771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.324785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.334709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.334766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.334781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.334787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.334793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.334806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.344736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.344793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.344810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.344817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.344823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.344837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.354756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.354811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.354827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.354834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.354840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.354854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.364783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.364842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.364857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.364867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.364873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.364886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.374763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.374861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.374876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.374882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.374887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.374901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.384879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.384945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.384960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.384966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.384972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.384986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.394877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.394934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.394949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.394955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.394961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.394974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.404895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.404952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.404966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.404973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.404978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.404991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.641 [2024-07-12 17:35:32.414943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.641 [2024-07-12 17:35:32.415024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.641 [2024-07-12 17:35:32.415039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.641 [2024-07-12 17:35:32.415046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.641 [2024-07-12 17:35:32.415051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.641 [2024-07-12 17:35:32.415066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.641 qpair failed and we were unable to recover it.
00:27:13.903 [2024-07-12 17:35:32.424954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.903 [2024-07-12 17:35:32.425012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.903 [2024-07-12 17:35:32.425028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.903 [2024-07-12 17:35:32.425035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.903 [2024-07-12 17:35:32.425040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.903 [2024-07-12 17:35:32.425054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.903 qpair failed and we were unable to recover it.
00:27:13.903 [2024-07-12 17:35:32.434923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.903 [2024-07-12 17:35:32.434980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.903 [2024-07-12 17:35:32.434995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.903 [2024-07-12 17:35:32.435001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.903 [2024-07-12 17:35:32.435007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.903 [2024-07-12 17:35:32.435021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.903 qpair failed and we were unable to recover it.
00:27:13.903 [2024-07-12 17:35:32.445022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.903 [2024-07-12 17:35:32.445078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.903 [2024-07-12 17:35:32.445093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.903 [2024-07-12 17:35:32.445099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.903 [2024-07-12 17:35:32.445105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.903 [2024-07-12 17:35:32.445118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.903 qpair failed and we were unable to recover it.
00:27:13.903 [2024-07-12 17:35:32.455061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.903 [2024-07-12 17:35:32.455118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.903 [2024-07-12 17:35:32.455135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.903 [2024-07-12 17:35:32.455141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.903 [2024-07-12 17:35:32.455147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.903 [2024-07-12 17:35:32.455160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.903 qpair failed and we were unable to recover it.
00:27:13.903 [2024-07-12 17:35:32.465116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.903 [2024-07-12 17:35:32.465173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.903 [2024-07-12 17:35:32.465188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.903 [2024-07-12 17:35:32.465195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.903 [2024-07-12 17:35:32.465201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.903 [2024-07-12 17:35:32.465214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.903 qpair failed and we were unable to recover it.
00:27:13.903 [2024-07-12 17:35:32.475110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.903 [2024-07-12 17:35:32.475161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.903 [2024-07-12 17:35:32.475176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.903 [2024-07-12 17:35:32.475182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.903 [2024-07-12 17:35:32.475188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.903 [2024-07-12 17:35:32.475202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.903 qpair failed and we were unable to recover it.
00:27:13.903 [2024-07-12 17:35:32.485141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.903 [2024-07-12 17:35:32.485197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.903 [2024-07-12 17:35:32.485212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.903 [2024-07-12 17:35:32.485218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.903 [2024-07-12 17:35:32.485224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.903 [2024-07-12 17:35:32.485239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.903 qpair failed and we were unable to recover it.
00:27:13.903 [2024-07-12 17:35:32.495181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.903 [2024-07-12 17:35:32.495240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.495255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.495262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.495267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.495284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.505206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.904 [2024-07-12 17:35:32.505266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.505281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.505287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.505293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.505307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.515277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.904 [2024-07-12 17:35:32.515335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.515349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.515356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.515362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.515376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.525261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.904 [2024-07-12 17:35:32.525321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.525336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.525342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.525348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.525362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.535299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.904 [2024-07-12 17:35:32.535360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.535375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.535384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.535390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.535403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.545358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.904 [2024-07-12 17:35:32.545425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.545443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.545449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.545455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.545469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.555347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.904 [2024-07-12 17:35:32.555405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.555420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.555426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.555432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.555445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.565385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.904 [2024-07-12 17:35:32.565442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.565457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.565464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.565469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.565483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.575428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.904 [2024-07-12 17:35:32.575486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.575501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.575508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.575513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.575526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.585437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.904 [2024-07-12 17:35:32.585499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.585514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.585521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.585527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.585544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.595475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.904 [2024-07-12 17:35:32.595530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.595545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.595551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.595557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.595571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.605511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.904 [2024-07-12 17:35:32.605568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.605583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.605589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.605595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.605608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.615476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.904 [2024-07-12 17:35:32.615566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.615581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.615588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.615594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.615607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.625541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:13.904 [2024-07-12 17:35:32.625596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:13.904 [2024-07-12 17:35:32.625611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:13.904 [2024-07-12 17:35:32.625618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:13.904 [2024-07-12 17:35:32.625623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:13.904 [2024-07-12 17:35:32.625637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:13.904 qpair failed and we were unable to recover it.
00:27:13.904 [2024-07-12 17:35:32.635567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.904 [2024-07-12 17:35:32.635624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.904 [2024-07-12 17:35:32.635643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.904 [2024-07-12 17:35:32.635649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.904 [2024-07-12 17:35:32.635655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.905 [2024-07-12 17:35:32.635669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.905 qpair failed and we were unable to recover it. 
00:27:13.905 [2024-07-12 17:35:32.645618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.905 [2024-07-12 17:35:32.645692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.905 [2024-07-12 17:35:32.645707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.905 [2024-07-12 17:35:32.645714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.905 [2024-07-12 17:35:32.645719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.905 [2024-07-12 17:35:32.645733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.905 qpair failed and we were unable to recover it. 
00:27:13.905 [2024-07-12 17:35:32.655657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.905 [2024-07-12 17:35:32.655717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.905 [2024-07-12 17:35:32.655732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.905 [2024-07-12 17:35:32.655739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.905 [2024-07-12 17:35:32.655744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.905 [2024-07-12 17:35:32.655758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.905 qpair failed and we were unable to recover it. 
00:27:13.905 [2024-07-12 17:35:32.665671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.905 [2024-07-12 17:35:32.665729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.905 [2024-07-12 17:35:32.665743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.905 [2024-07-12 17:35:32.665750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.905 [2024-07-12 17:35:32.665755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.905 [2024-07-12 17:35:32.665768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.905 qpair failed and we were unable to recover it. 
00:27:13.905 [2024-07-12 17:35:32.675701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:13.905 [2024-07-12 17:35:32.675753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:13.905 [2024-07-12 17:35:32.675767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:13.905 [2024-07-12 17:35:32.675773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:13.905 [2024-07-12 17:35:32.675779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:13.905 [2024-07-12 17:35:32.675796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:13.905 qpair failed and we were unable to recover it. 
00:27:14.165 [2024-07-12 17:35:32.685697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.165 [2024-07-12 17:35:32.685751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.165 [2024-07-12 17:35:32.685766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.165 [2024-07-12 17:35:32.685772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.165 [2024-07-12 17:35:32.685778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.165 [2024-07-12 17:35:32.685791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.165 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.695779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.695835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.695850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.695856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.695862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.695875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.705794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.705852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.705866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.705873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.705878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.705892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.715827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.715880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.715894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.715901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.715907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.715920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.725872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.725930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.725948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.725955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.725960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.725974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.735895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.735954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.735968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.735975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.735980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.735993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.745919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.745979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.745994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.746001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.746006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.746019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.755983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.756042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.756056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.756063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.756069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.756082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.765952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.766007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.766022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.766029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.766037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.766051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.775993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.776054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.776069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.776076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.776081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.776095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.786028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.786108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.786123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.786129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.786135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.786149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.796044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.796100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.796115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.796122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.796128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.796141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.806104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.806157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.806172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.806178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.806184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.806197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.816125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.816187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.816202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.816209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.816214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.816228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.826078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.826137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.826152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.166 [2024-07-12 17:35:32.826158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.166 [2024-07-12 17:35:32.826164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.166 [2024-07-12 17:35:32.826177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.166 qpair failed and we were unable to recover it. 
00:27:14.166 [2024-07-12 17:35:32.836173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.166 [2024-07-12 17:35:32.836228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.166 [2024-07-12 17:35:32.836243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.167 [2024-07-12 17:35:32.836249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.167 [2024-07-12 17:35:32.836254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.167 [2024-07-12 17:35:32.836268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.167 qpair failed and we were unable to recover it. 
00:27:14.167 [2024-07-12 17:35:32.846198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.167 [2024-07-12 17:35:32.846252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.167 [2024-07-12 17:35:32.846266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.167 [2024-07-12 17:35:32.846273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.167 [2024-07-12 17:35:32.846279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.167 [2024-07-12 17:35:32.846292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.167 qpair failed and we were unable to recover it. 
00:27:14.167 [2024-07-12 17:35:32.856213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.167 [2024-07-12 17:35:32.856268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.167 [2024-07-12 17:35:32.856283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.167 [2024-07-12 17:35:32.856290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.167 [2024-07-12 17:35:32.856299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.167 [2024-07-12 17:35:32.856312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.167 qpair failed and we were unable to recover it. 
00:27:14.167 [2024-07-12 17:35:32.866249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.167 [2024-07-12 17:35:32.866307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.167 [2024-07-12 17:35:32.866322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.167 [2024-07-12 17:35:32.866329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.167 [2024-07-12 17:35:32.866335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.167 [2024-07-12 17:35:32.866348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.167 qpair failed and we were unable to recover it. 
00:27:14.167 [2024-07-12 17:35:32.876225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.167 [2024-07-12 17:35:32.876282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.167 [2024-07-12 17:35:32.876297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.167 [2024-07-12 17:35:32.876304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.167 [2024-07-12 17:35:32.876310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.167 [2024-07-12 17:35:32.876323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.167 qpair failed and we were unable to recover it. 
00:27:14.167 [2024-07-12 17:35:32.886309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.167 [2024-07-12 17:35:32.886365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.167 [2024-07-12 17:35:32.886385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.167 [2024-07-12 17:35:32.886392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.167 [2024-07-12 17:35:32.886398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.167 [2024-07-12 17:35:32.886411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.167 qpair failed and we were unable to recover it. 
00:27:14.167 [2024-07-12 17:35:32.896282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.167 [2024-07-12 17:35:32.896336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.167 [2024-07-12 17:35:32.896351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.167 [2024-07-12 17:35:32.896357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.167 [2024-07-12 17:35:32.896363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.167 [2024-07-12 17:35:32.896381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.167 qpair failed and we were unable to recover it. 
00:27:14.167 [2024-07-12 17:35:32.906380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.167 [2024-07-12 17:35:32.906439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.167 [2024-07-12 17:35:32.906455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.167 [2024-07-12 17:35:32.906461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.167 [2024-07-12 17:35:32.906467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.167 [2024-07-12 17:35:32.906480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.167 qpair failed and we were unable to recover it. 
00:27:14.167 [2024-07-12 17:35:32.916400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.167 [2024-07-12 17:35:32.916456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.167 [2024-07-12 17:35:32.916470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.167 [2024-07-12 17:35:32.916477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.167 [2024-07-12 17:35:32.916482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.167 [2024-07-12 17:35:32.916496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.167 qpair failed and we were unable to recover it.
00:27:14.167 [2024-07-12 17:35:32.926437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.167 [2024-07-12 17:35:32.926492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.167 [2024-07-12 17:35:32.926507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.167 [2024-07-12 17:35:32.926513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.167 [2024-07-12 17:35:32.926519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.167 [2024-07-12 17:35:32.926532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.167 qpair failed and we were unable to recover it.
00:27:14.167 [2024-07-12 17:35:32.936462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.167 [2024-07-12 17:35:32.936521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.167 [2024-07-12 17:35:32.936535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.167 [2024-07-12 17:35:32.936542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.167 [2024-07-12 17:35:32.936547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.167 [2024-07-12 17:35:32.936561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.167 qpair failed and we were unable to recover it.
00:27:14.428 [2024-07-12 17:35:32.946430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.428 [2024-07-12 17:35:32.946491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.428 [2024-07-12 17:35:32.946506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.428 [2024-07-12 17:35:32.946516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.428 [2024-07-12 17:35:32.946522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.428 [2024-07-12 17:35:32.946536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.428 qpair failed and we were unable to recover it.
00:27:14.428 [2024-07-12 17:35:32.956526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.428 [2024-07-12 17:35:32.956584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.428 [2024-07-12 17:35:32.956599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.428 [2024-07-12 17:35:32.956605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.428 [2024-07-12 17:35:32.956611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.428 [2024-07-12 17:35:32.956624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.428 qpair failed and we were unable to recover it.
00:27:14.428 [2024-07-12 17:35:32.966559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.428 [2024-07-12 17:35:32.966619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.428 [2024-07-12 17:35:32.966634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.428 [2024-07-12 17:35:32.966640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.428 [2024-07-12 17:35:32.966646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.428 [2024-07-12 17:35:32.966660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.428 qpair failed and we were unable to recover it.
00:27:14.428 [2024-07-12 17:35:32.976600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.428 [2024-07-12 17:35:32.976665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.428 [2024-07-12 17:35:32.976680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.428 [2024-07-12 17:35:32.976686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.428 [2024-07-12 17:35:32.976692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.428 [2024-07-12 17:35:32.976705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.428 qpair failed and we were unable to recover it.
00:27:14.428 [2024-07-12 17:35:32.986604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.428 [2024-07-12 17:35:32.986666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.428 [2024-07-12 17:35:32.986681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.428 [2024-07-12 17:35:32.986688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.428 [2024-07-12 17:35:32.986695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.428 [2024-07-12 17:35:32.986709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.428 qpair failed and we were unable to recover it.
00:27:14.428 [2024-07-12 17:35:32.996626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.428 [2024-07-12 17:35:32.996686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.428 [2024-07-12 17:35:32.996702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.428 [2024-07-12 17:35:32.996708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.428 [2024-07-12 17:35:32.996714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.428 [2024-07-12 17:35:32.996728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.428 qpair failed and we were unable to recover it.
00:27:14.428 [2024-07-12 17:35:33.006648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.428 [2024-07-12 17:35:33.006707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.428 [2024-07-12 17:35:33.006722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.428 [2024-07-12 17:35:33.006729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.428 [2024-07-12 17:35:33.006734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.428 [2024-07-12 17:35:33.006748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.428 qpair failed and we were unable to recover it.
00:27:14.428 [2024-07-12 17:35:33.016630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.428 [2024-07-12 17:35:33.016688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.428 [2024-07-12 17:35:33.016703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.428 [2024-07-12 17:35:33.016709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.428 [2024-07-12 17:35:33.016715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.428 [2024-07-12 17:35:33.016729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.428 qpair failed and we were unable to recover it.
00:27:14.428 [2024-07-12 17:35:33.026643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.428 [2024-07-12 17:35:33.026702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.428 [2024-07-12 17:35:33.026717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.428 [2024-07-12 17:35:33.026724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.428 [2024-07-12 17:35:33.026729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.428 [2024-07-12 17:35:33.026743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.429 qpair failed and we were unable to recover it.
00:27:14.429 [2024-07-12 17:35:33.036679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.429 [2024-07-12 17:35:33.036739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.429 [2024-07-12 17:35:33.036755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.429 [2024-07-12 17:35:33.036765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.429 [2024-07-12 17:35:33.036771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.429 [2024-07-12 17:35:33.036784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.429 qpair failed and we were unable to recover it.
00:27:14.429 [2024-07-12 17:35:33.046772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.429 [2024-07-12 17:35:33.046828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.429 [2024-07-12 17:35:33.046843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.429 [2024-07-12 17:35:33.046850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.429 [2024-07-12 17:35:33.046855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.429 [2024-07-12 17:35:33.046869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.429 qpair failed and we were unable to recover it.
00:27:14.429 [2024-07-12 17:35:33.056760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.429 [2024-07-12 17:35:33.056815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.429 [2024-07-12 17:35:33.056831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.429 [2024-07-12 17:35:33.056837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.429 [2024-07-12 17:35:33.056843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.429 [2024-07-12 17:35:33.056856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.429 qpair failed and we were unable to recover it.
00:27:14.429 [2024-07-12 17:35:33.066829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.429 [2024-07-12 17:35:33.066889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.429 [2024-07-12 17:35:33.066905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.429 [2024-07-12 17:35:33.066911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.429 [2024-07-12 17:35:33.066916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.429 [2024-07-12 17:35:33.066930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.429 qpair failed and we were unable to recover it.
00:27:14.429 [2024-07-12 17:35:33.076809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.429 [2024-07-12 17:35:33.076864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.429 [2024-07-12 17:35:33.076878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.429 [2024-07-12 17:35:33.076885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.429 [2024-07-12 17:35:33.076891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.429 [2024-07-12 17:35:33.076904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.429 qpair failed and we were unable to recover it.
00:27:14.429 [2024-07-12 17:35:33.086832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.429 [2024-07-12 17:35:33.086886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.429 [2024-07-12 17:35:33.086901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.429 [2024-07-12 17:35:33.086908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.429 [2024-07-12 17:35:33.086914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.429 [2024-07-12 17:35:33.086927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.429 qpair failed and we were unable to recover it.
00:27:14.429 [2024-07-12 17:35:33.096899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.429 [2024-07-12 17:35:33.096956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.429 [2024-07-12 17:35:33.096970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.429 [2024-07-12 17:35:33.096977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.429 [2024-07-12 17:35:33.096983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.429 [2024-07-12 17:35:33.096996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.429 qpair failed and we were unable to recover it.
00:27:14.429 [2024-07-12 17:35:33.106937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.429 [2024-07-12 17:35:33.106995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.429 [2024-07-12 17:35:33.107010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.429 [2024-07-12 17:35:33.107016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.429 [2024-07-12 17:35:33.107022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.429 [2024-07-12 17:35:33.107035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.429 qpair failed and we were unable to recover it.
00:27:14.429 [2024-07-12 17:35:33.116968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.429 [2024-07-12 17:35:33.117029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.429 [2024-07-12 17:35:33.117044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.429 [2024-07-12 17:35:33.117050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.429 [2024-07-12 17:35:33.117056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.429 [2024-07-12 17:35:33.117072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.429 qpair failed and we were unable to recover it.
00:27:14.429 [2024-07-12 17:35:33.126980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.429 [2024-07-12 17:35:33.127034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.429 [2024-07-12 17:35:33.127048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.429 [2024-07-12 17:35:33.127058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.429 [2024-07-12 17:35:33.127064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.429 [2024-07-12 17:35:33.127077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.429 qpair failed and we were unable to recover it.
00:27:14.429 [2024-07-12 17:35:33.137022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.429 [2024-07-12 17:35:33.137081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.429 [2024-07-12 17:35:33.137095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.429 [2024-07-12 17:35:33.137102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.429 [2024-07-12 17:35:33.137107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.429 [2024-07-12 17:35:33.137121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.429 qpair failed and we were unable to recover it.
00:27:14.429 [2024-07-12 17:35:33.147034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.429 [2024-07-12 17:35:33.147090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.430 [2024-07-12 17:35:33.147104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.430 [2024-07-12 17:35:33.147111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.430 [2024-07-12 17:35:33.147117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.430 [2024-07-12 17:35:33.147131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.430 qpair failed and we were unable to recover it.
00:27:14.430 [2024-07-12 17:35:33.157030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.430 [2024-07-12 17:35:33.157095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.430 [2024-07-12 17:35:33.157110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.430 [2024-07-12 17:35:33.157116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.430 [2024-07-12 17:35:33.157122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.430 [2024-07-12 17:35:33.157135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.430 qpair failed and we were unable to recover it.
00:27:14.430 [2024-07-12 17:35:33.167049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.430 [2024-07-12 17:35:33.167111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.430 [2024-07-12 17:35:33.167126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.430 [2024-07-12 17:35:33.167132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.430 [2024-07-12 17:35:33.167138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.430 [2024-07-12 17:35:33.167152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.430 qpair failed and we were unable to recover it.
00:27:14.430 [2024-07-12 17:35:33.177129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.430 [2024-07-12 17:35:33.177187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.430 [2024-07-12 17:35:33.177202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.430 [2024-07-12 17:35:33.177208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.430 [2024-07-12 17:35:33.177214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.430 [2024-07-12 17:35:33.177228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.430 qpair failed and we were unable to recover it.
00:27:14.430 [2024-07-12 17:35:33.187182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.430 [2024-07-12 17:35:33.187237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.430 [2024-07-12 17:35:33.187252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.430 [2024-07-12 17:35:33.187259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.430 [2024-07-12 17:35:33.187265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.430 [2024-07-12 17:35:33.187279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.430 qpair failed and we were unable to recover it.
00:27:14.430 [2024-07-12 17:35:33.197124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.430 [2024-07-12 17:35:33.197223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.430 [2024-07-12 17:35:33.197238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.430 [2024-07-12 17:35:33.197245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.430 [2024-07-12 17:35:33.197251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.430 [2024-07-12 17:35:33.197265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.430 qpair failed and we were unable to recover it.
00:27:14.690 [2024-07-12 17:35:33.207146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.691 [2024-07-12 17:35:33.207206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.691 [2024-07-12 17:35:33.207221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.691 [2024-07-12 17:35:33.207227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.691 [2024-07-12 17:35:33.207233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.691 [2024-07-12 17:35:33.207247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.691 qpair failed and we were unable to recover it.
00:27:14.691 [2024-07-12 17:35:33.217253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.691 [2024-07-12 17:35:33.217311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.691 [2024-07-12 17:35:33.217329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.691 [2024-07-12 17:35:33.217335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.691 [2024-07-12 17:35:33.217341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.691 [2024-07-12 17:35:33.217354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.691 qpair failed and we were unable to recover it.
00:27:14.691 [2024-07-12 17:35:33.227316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.691 [2024-07-12 17:35:33.227385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.691 [2024-07-12 17:35:33.227401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.691 [2024-07-12 17:35:33.227407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.691 [2024-07-12 17:35:33.227413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.691 [2024-07-12 17:35:33.227427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.691 qpair failed and we were unable to recover it.
00:27:14.691 [2024-07-12 17:35:33.237254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.691 [2024-07-12 17:35:33.237306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.691 [2024-07-12 17:35:33.237321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.691 [2024-07-12 17:35:33.237328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.691 [2024-07-12 17:35:33.237334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.691 [2024-07-12 17:35:33.237347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.691 qpair failed and we were unable to recover it.
00:27:14.691 [2024-07-12 17:35:33.247330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.691 [2024-07-12 17:35:33.247392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.691 [2024-07-12 17:35:33.247407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.691 [2024-07-12 17:35:33.247415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.691 [2024-07-12 17:35:33.247422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.691 [2024-07-12 17:35:33.247437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.691 qpair failed and we were unable to recover it.
00:27:14.691 [2024-07-12 17:35:33.257301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.691 [2024-07-12 17:35:33.257357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.691 [2024-07-12 17:35:33.257371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.691 [2024-07-12 17:35:33.257383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.691 [2024-07-12 17:35:33.257389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.691 [2024-07-12 17:35:33.257403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.691 qpair failed and we were unable to recover it.
00:27:14.691 [2024-07-12 17:35:33.267424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.691 [2024-07-12 17:35:33.267488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.691 [2024-07-12 17:35:33.267503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.691 [2024-07-12 17:35:33.267510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.691 [2024-07-12 17:35:33.267515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.691 [2024-07-12 17:35:33.267529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.691 qpair failed and we were unable to recover it.
00:27:14.691 [2024-07-12 17:35:33.277427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.691 [2024-07-12 17:35:33.277488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.691 [2024-07-12 17:35:33.277504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.691 [2024-07-12 17:35:33.277511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.691 [2024-07-12 17:35:33.277516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.691 [2024-07-12 17:35:33.277530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.691 qpair failed and we were unable to recover it. 
00:27:14.691 [2024-07-12 17:35:33.287460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.691 [2024-07-12 17:35:33.287517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.691 [2024-07-12 17:35:33.287532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.691 [2024-07-12 17:35:33.287539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.691 [2024-07-12 17:35:33.287544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.691 [2024-07-12 17:35:33.287558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.691 qpair failed and we were unable to recover it. 
00:27:14.691 [2024-07-12 17:35:33.297480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.691 [2024-07-12 17:35:33.297539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.691 [2024-07-12 17:35:33.297553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.691 [2024-07-12 17:35:33.297560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.691 [2024-07-12 17:35:33.297565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.691 [2024-07-12 17:35:33.297579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.691 qpair failed and we were unable to recover it. 
00:27:14.691 [2024-07-12 17:35:33.307528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.691 [2024-07-12 17:35:33.307591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.691 [2024-07-12 17:35:33.307610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.691 [2024-07-12 17:35:33.307617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.691 [2024-07-12 17:35:33.307623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.691 [2024-07-12 17:35:33.307636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.691 qpair failed and we were unable to recover it. 
00:27:14.691 [2024-07-12 17:35:33.317463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.691 [2024-07-12 17:35:33.317523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.691 [2024-07-12 17:35:33.317538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.691 [2024-07-12 17:35:33.317544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.691 [2024-07-12 17:35:33.317550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.691 [2024-07-12 17:35:33.317564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.691 qpair failed and we were unable to recover it. 
00:27:14.691 [2024-07-12 17:35:33.327572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.691 [2024-07-12 17:35:33.327631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.691 [2024-07-12 17:35:33.327645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.691 [2024-07-12 17:35:33.327652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.691 [2024-07-12 17:35:33.327658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.691 [2024-07-12 17:35:33.327672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.691 qpair failed and we were unable to recover it. 
00:27:14.691 [2024-07-12 17:35:33.337542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.691 [2024-07-12 17:35:33.337603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.691 [2024-07-12 17:35:33.337620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.691 [2024-07-12 17:35:33.337626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.691 [2024-07-12 17:35:33.337632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.691 [2024-07-12 17:35:33.337646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.691 qpair failed and we were unable to recover it. 
00:27:14.691 [2024-07-12 17:35:33.347551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.691 [2024-07-12 17:35:33.347614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.691 [2024-07-12 17:35:33.347629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.692 [2024-07-12 17:35:33.347636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.692 [2024-07-12 17:35:33.347641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.692 [2024-07-12 17:35:33.347658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.692 qpair failed and we were unable to recover it. 
00:27:14.692 [2024-07-12 17:35:33.357627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.692 [2024-07-12 17:35:33.357686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.692 [2024-07-12 17:35:33.357702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.692 [2024-07-12 17:35:33.357709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.692 [2024-07-12 17:35:33.357715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.692 [2024-07-12 17:35:33.357730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.692 qpair failed and we were unable to recover it. 
00:27:14.692 [2024-07-12 17:35:33.367624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.692 [2024-07-12 17:35:33.367677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.692 [2024-07-12 17:35:33.367692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.692 [2024-07-12 17:35:33.367699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.692 [2024-07-12 17:35:33.367705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.692 [2024-07-12 17:35:33.367718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.692 qpair failed and we were unable to recover it. 
00:27:14.692 [2024-07-12 17:35:33.377645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.692 [2024-07-12 17:35:33.377702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.692 [2024-07-12 17:35:33.377717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.692 [2024-07-12 17:35:33.377724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.692 [2024-07-12 17:35:33.377729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.692 [2024-07-12 17:35:33.377743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.692 qpair failed and we were unable to recover it. 
00:27:14.692 [2024-07-12 17:35:33.387667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.692 [2024-07-12 17:35:33.387727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.692 [2024-07-12 17:35:33.387742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.692 [2024-07-12 17:35:33.387749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.692 [2024-07-12 17:35:33.387755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.692 [2024-07-12 17:35:33.387768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.692 qpair failed and we were unable to recover it. 
00:27:14.692 [2024-07-12 17:35:33.397696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.692 [2024-07-12 17:35:33.397750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.692 [2024-07-12 17:35:33.397768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.692 [2024-07-12 17:35:33.397774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.692 [2024-07-12 17:35:33.397780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.692 [2024-07-12 17:35:33.397794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.692 qpair failed and we were unable to recover it. 
00:27:14.692 [2024-07-12 17:35:33.407795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.692 [2024-07-12 17:35:33.407850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.692 [2024-07-12 17:35:33.407865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.692 [2024-07-12 17:35:33.407871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.692 [2024-07-12 17:35:33.407877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.692 [2024-07-12 17:35:33.407891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.692 qpair failed and we were unable to recover it. 
00:27:14.692 [2024-07-12 17:35:33.417827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.692 [2024-07-12 17:35:33.417883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.692 [2024-07-12 17:35:33.417898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.692 [2024-07-12 17:35:33.417904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.692 [2024-07-12 17:35:33.417910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.692 [2024-07-12 17:35:33.417924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.692 qpair failed and we were unable to recover it. 
00:27:14.692 [2024-07-12 17:35:33.427854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:14.692 [2024-07-12 17:35:33.427907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:14.692 [2024-07-12 17:35:33.427922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:14.692 [2024-07-12 17:35:33.427929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:14.692 [2024-07-12 17:35:33.427934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0 00:27:14.692 [2024-07-12 17:35:33.427949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:14.692 qpair failed and we were unable to recover it. 
00:27:14.692 [2024-07-12 17:35:33.437920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:14.692 [2024-07-12 17:35:33.438019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:14.692 [2024-07-12 17:35:33.438034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:14.692 [2024-07-12 17:35:33.438040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:14.692 [2024-07-12 17:35:33.438046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24deed0
00:27:14.692 [2024-07-12 17:35:33.438064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:14.692 qpair failed and we were unable to recover it.
00:27:14.692 [2024-07-12 17:35:33.438152] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:27:14.692 A controller has encountered a failure and is being reset.
00:27:14.951 Controller properly reset.
00:27:14.951 Initializing NVMe Controllers
00:27:14.951 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:14.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:14.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:14.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:14.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:14.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:14.951 Initialization complete. Launching workers.
00:27:14.951 Starting thread on core 1
00:27:14.951 Starting thread on core 2
00:27:14.951 Starting thread on core 3
00:27:14.951 Starting thread on core 0
00:27:14.951 17:35:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:27:14.951
00:27:14.951 real 0m11.264s
00:27:14.951 user 0m21.648s
00:27:14.951 sys 0m4.278s
00:27:14.951 17:35:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:14.951 17:35:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.951 ************************************
00:27:14.951 END TEST nvmf_target_disconnect_tc2
00:27:14.951 ************************************
00:27:14.951 17:35:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:27:14.951 17:35:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:27:14.951 17:35:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:27:14.951 17:35:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:27:14.951 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:14.951 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:14.952 rmmod nvme_tcp
00:27:14.952 rmmod nvme_fabrics
00:27:14.952 rmmod nvme_keyring
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 31762 ']'
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 31762
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 31762 ']'
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 31762
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 31762
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 31762'
00:27:14.952 killing process with pid 31762
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 31762
00:27:14.952 17:35:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 31762
00:27:15.218 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:15.218 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:15.218 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:15.218 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:15.218 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:15.218 17:35:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:15.218 17:35:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:15.218 17:35:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:17.757 17:35:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:17.757
00:27:17.757 real 0m19.351s
00:27:17.757 user 0m48.589s
00:27:17.757 sys 0m8.700s
00:27:17.757 17:35:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:17.757 17:35:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:27:17.757 ************************************
00:27:17.757 END TEST nvmf_target_disconnect
00:27:17.757 ************************************
00:27:17.757 17:35:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:27:17.757 17:35:36 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:27:17.757 17:35:36 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:17.757 17:35:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:17.757 17:35:36 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:27:17.757
00:27:17.757 real 20m54.034s
00:27:17.757 user 45m17.093s
00:27:17.757 sys 6m13.515s
00:27:17.757 17:35:36 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:17.757 17:35:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:17.757 ************************************
00:27:17.757 END TEST nvmf_tcp
00:27:17.757 ************************************
00:27:17.757 17:35:36 -- common/autotest_common.sh@1142 -- # return 0
00:27:17.757 17:35:36 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:27:17.757 17:35:36 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:27:17.757 17:35:36 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:27:17.757 17:35:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:17.757 17:35:36 -- common/autotest_common.sh@10 -- # set +x
00:27:17.757 ************************************
00:27:17.757 START TEST spdkcli_nvmf_tcp
00:27:17.757 ************************************
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:27:17.757 * Looking for test storage...
00:27:17.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=33447
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 33447
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 33447 ']'
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:17.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:17.757 17:35:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:17.757 [2024-07-12 17:35:36.257506] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization...
00:27:17.757 [2024-07-12 17:35:36.257553] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid33447 ]
00:27:17.757 EAL: No free 2048 kB hugepages reported on node 1
00:27:17.757 [2024-07-12 17:35:36.312823] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:27:17.757 [2024-07-12 17:35:36.392594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:17.757 [2024-07-12 17:35:36.392598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:27:18.326 17:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:18.326 17:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0
00:27:18.326 17:35:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:27:18.327 17:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:18.327 17:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:18.586 17:35:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:27:18.586 17:35:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:27:18.586 17:35:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:27:18.586 17:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:18.586 17:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:18.586 17:35:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:27:18.586 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:27:18.586 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:27:18.586 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:27:18.586 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:27:18.586 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:27:18.586 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:27:18.586 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:27:18.586 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:27:18.586 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\''
True 00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:18.586 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:18.586 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:18.586 ' 00:27:21.121 [2024-07-12 17:35:39.492343] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.058 [2024-07-12 17:35:40.668216] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:24.591 [2024-07-12 17:35:42.830873] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:25.995 [2024-07-12 17:35:44.688711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:27.372 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', 
True] 00:27:27.372 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:27.372 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:27.372 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:27.372 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:27.372 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:27.372 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:27.372 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:27.372 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:27.372 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 
127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:27.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:27.372 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:27.631 17:35:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:27.631 17:35:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:27.631 17:35:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:27.631 17:35:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:27.631 17:35:46 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:27:27.631 17:35:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:27.631 17:35:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:27.631 17:35:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:27.890 17:35:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:28.150 17:35:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:28.150 17:35:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:28.150 17:35:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:28.150 17:35:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:28.150 17:35:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:28.150 17:35:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:28.151 17:35:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:28.151 17:35:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:28.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:28.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:28.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:28.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses 
delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:28.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:28.151 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:28.151 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:28.151 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:28.151 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:28.151 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:28.151 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:28.151 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:28.151 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:28.151 ' 00:27:33.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:33.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:33.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:33.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:33.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:33.428 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:33.428 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:33.428 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:33.428 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:33.428 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:33.428 
Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:33.428 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:33.428 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:33.428 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 33447 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 33447 ']' 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 33447 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 33447 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 33447' 00:27:33.428 killing process with pid 33447 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 33447 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 33447 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 33447 ']' 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- 
# killprocess 33447 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 33447 ']' 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 33447 00:27:33.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (33447) - No such process 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 33447 is not found' 00:27:33.428 Process with pid 33447 is not found 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:33.428 00:27:33.428 real 0m15.802s 00:27:33.428 user 0m32.798s 00:27:33.428 sys 0m0.677s 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:33.428 17:35:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:33.428 ************************************ 00:27:33.428 END TEST spdkcli_nvmf_tcp 00:27:33.428 ************************************ 00:27:33.428 17:35:51 -- common/autotest_common.sh@1142 -- # return 0 00:27:33.428 17:35:51 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:33.428 17:35:51 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:33.428 17:35:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.428 17:35:51 -- common/autotest_common.sh@10 -- # set +x 00:27:33.428 ************************************ 00:27:33.428 START TEST nvmf_identify_passthru 00:27:33.428 ************************************ 00:27:33.428 
17:35:51 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:33.428 * Looking for test storage... 00:27:33.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:33.428 17:35:52 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.428 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.428 17:35:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.428 17:35:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.428 17:35:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.428 17:35:52 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.429 17:35:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.429 17:35:52 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.429 17:35:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:33.429 17:35:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:33.429 17:35:52 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.429 17:35:52 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.429 17:35:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.429 17:35:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.429 17:35:52 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.429 17:35:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.429 17:35:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.429 17:35:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:27:33.429 17:35:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.429 17:35:52 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.429 17:35:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:33.429 17:35:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:33.429 17:35:52 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:33.429 17:35:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.700 17:35:57 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:38.700 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:38.700 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:38.700 17:35:57 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:38.700 Found net devices under 0000:86:00.0: cvl_0_0 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.700 17:35:57 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.700 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:38.701 Found net devices under 0000:86:00.1: cvl_0_1 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.701 17:35:57 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:38.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:27:38.701 00:27:38.701 --- 10.0.0.2 ping statistics --- 00:27:38.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.701 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:27:38.701 00:27:38.701 --- 10.0.0.1 ping statistics --- 00:27:38.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.701 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:38.701 17:35:57 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:38.961 17:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:38.961 17:35:57 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:38.961 17:35:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:38.961 17:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:38.961 17:35:57 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:38.961 17:35:57 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:38.961 17:35:57 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:38.961 17:35:57 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:38.961 17:35:57 nvmf_identify_passthru -- 
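The namespace setup traced above (common.sh's `nvmf_tcp_init`) boils down to a short command sequence: move one NIC port into a fresh namespace as the target, leave the other in the root namespace as the initiator, and verify with ping. A dry-run sketch of that sequence follows; the interface names `cvl_0_0`/`cvl_0_1` are the ones this particular E810 system produced, and running it for real requires root and that hardware, so by default it only echoes the commands:

```shell
# Sketch of the network-namespace topology common.sh builds for TCP tests.
# DRY_RUN=1 (the default here) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0       # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1    # stays in the root namespace, gets 10.0.0.1

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator side reaching the target IP
```

The two `ping -c 1` checks in the log (root namespace to 10.0.0.2, then inside the namespace back to 10.0.0.1) confirm both directions of this topology before the target starts.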
common/autotest_common.sh@1513 -- # bdfs=() 00:27:38.961 17:35:57 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:38.961 17:35:57 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:38.961 17:35:57 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:38.961 17:35:57 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:38.961 17:35:57 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:38.961 17:35:57 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:27:38.961 17:35:57 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:27:38.961 17:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:27:38.961 17:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:27:38.961 17:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:38.961 17:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:38.961 17:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:38.961 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.147 17:36:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:27:43.147 17:36:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:43.147 17:36:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
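The `grep 'Serial Number:' | awk '{print $3}'` pipeline above is how the test extracts the serial from `spdk_nvme_identify` output: the identify report contains a padded `Serial Number:` line, and the third whitespace-delimited field is the value. A self-contained sketch, with `sample_output` standing in for the real identify invocation:

```shell
# Extract the serial the same way identify_passthru.sh does:
# isolate the "Serial Number:" line, take the third awk field.
# sample_output is a stand-in for spdk_nvme_identify's real report.
sample_output='Serial Number:                         BTLJ72430F0E1P0FGN'
serial=$(printf '%s\n' "$sample_output" | grep 'Serial Number:' | awk '{print $3}')
echo "$serial"   # prints BTLJ72430F0E1P0FGN
```

The same pattern is reused at line 24 of the script for `Model Number:`, which is why both values later get compared between the passthru subsystem and the physical controller.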
00:27:43.147 17:36:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:43.147 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.350 17:36:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:47.350 17:36:05 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:47.350 17:36:05 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:47.350 17:36:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:47.350 17:36:05 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:47.350 17:36:05 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:47.350 17:36:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:47.350 17:36:05 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=40437 00:27:47.350 17:36:05 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:47.350 17:36:05 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:47.350 17:36:05 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 40437 00:27:47.350 17:36:05 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 40437 ']' 00:27:47.350 17:36:05 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.350 17:36:05 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.350 17:36:05 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:47.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.350 17:36:05 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.350 17:36:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:47.350 [2024-07-12 17:36:05.990493] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:27:47.350 [2024-07-12 17:36:05.990540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.350 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.350 [2024-07-12 17:36:06.048938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.609 [2024-07-12 17:36:06.129847] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.609 [2024-07-12 17:36:06.129884] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.609 [2024-07-12 17:36:06.129891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.609 [2024-07-12 17:36:06.129898] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.609 [2024-07-12 17:36:06.129903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:47.609 [2024-07-12 17:36:06.129954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.609 [2024-07-12 17:36:06.130047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.609 [2024-07-12 17:36:06.130066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.609 [2024-07-12 17:36:06.130067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:27:48.176 17:36:06 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 INFO: Log level set to 20 00:27:48.176 INFO: Requests: 00:27:48.176 { 00:27:48.176 "jsonrpc": "2.0", 00:27:48.176 "method": "nvmf_set_config", 00:27:48.176 "id": 1, 00:27:48.176 "params": { 00:27:48.176 "admin_cmd_passthru": { 00:27:48.176 "identify_ctrlr": true 00:27:48.176 } 00:27:48.176 } 00:27:48.176 } 00:27:48.176 00:27:48.176 INFO: response: 00:27:48.176 { 00:27:48.176 "jsonrpc": "2.0", 00:27:48.176 "id": 1, 00:27:48.176 "result": true 00:27:48.176 } 00:27:48.176 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.176 17:36:06 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 INFO: Setting log level to 20 00:27:48.176 INFO: Setting log level to 20 00:27:48.176 INFO: Log level set to 20 00:27:48.176 INFO: Log level set to 20 00:27:48.176 
INFO: Requests: 00:27:48.176 { 00:27:48.176 "jsonrpc": "2.0", 00:27:48.176 "method": "framework_start_init", 00:27:48.176 "id": 1 00:27:48.176 } 00:27:48.176 00:27:48.176 INFO: Requests: 00:27:48.176 { 00:27:48.176 "jsonrpc": "2.0", 00:27:48.176 "method": "framework_start_init", 00:27:48.176 "id": 1 00:27:48.176 } 00:27:48.176 00:27:48.176 [2024-07-12 17:36:06.899281] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:48.176 INFO: response: 00:27:48.176 { 00:27:48.176 "jsonrpc": "2.0", 00:27:48.176 "id": 1, 00:27:48.176 "result": true 00:27:48.176 } 00:27:48.176 00:27:48.176 INFO: response: 00:27:48.176 { 00:27:48.176 "jsonrpc": "2.0", 00:27:48.176 "id": 1, 00:27:48.176 "result": true 00:27:48.176 } 00:27:48.176 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.176 17:36:06 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 INFO: Setting log level to 40 00:27:48.176 INFO: Setting log level to 40 00:27:48.176 INFO: Setting log level to 40 00:27:48.176 [2024-07-12 17:36:06.912797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.176 17:36:06 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 17:36:06 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:27:48.176 17:36:06 
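The `rpc_cmd` requests and responses logged above are plain JSON-RPC 2.0 messages sent to the target's UNIX socket (`rpc.py` normally builds and sends them). A sketch of assembling the `nvmf_set_config` request by hand, matching the payload shown in the log; the transport step is omitted since it needs a running target:

```shell
# Build the JSON-RPC 2.0 request rpc_cmd sends for
# "nvmf_set_config --passthru-identify-ctrlr".
method=nvmf_set_config
request=$(printf '{"jsonrpc": "2.0", "method": "%s", "id": 1, "params": {"admin_cmd_passthru": {"identify_ctrlr": true}}}' "$method")
echo "$request"
# Delivery would be over the target's RPC socket, e.g. /var/tmp/spdk.sock;
# the matching response in the log is simply {"jsonrpc": "2.0", "id": 1, "result": true}.
```

Enabling `admin_cmd_passthru.identify_ctrlr` before `framework_start_init` is what makes the target forward Identify admin commands to the backing NVMe controller, which the later serial/model comparison depends on.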
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.176 17:36:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:51.466 Nvme0n1 00:27:51.467 17:36:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.467 17:36:09 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:51.467 17:36:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.467 17:36:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:51.467 17:36:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.467 17:36:09 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:51.467 17:36:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.467 17:36:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:51.467 17:36:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.467 17:36:09 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:51.467 17:36:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.467 17:36:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:51.467 [2024-07-12 17:36:09.808335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.467 17:36:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.467 17:36:09 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:51.467 17:36:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.467 17:36:09 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:51.467 [ 00:27:51.467 { 00:27:51.467 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:51.467 "subtype": "Discovery", 00:27:51.467 "listen_addresses": [], 00:27:51.467 "allow_any_host": true, 00:27:51.467 "hosts": [] 00:27:51.467 }, 00:27:51.467 { 00:27:51.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:51.467 "subtype": "NVMe", 00:27:51.467 "listen_addresses": [ 00:27:51.467 { 00:27:51.467 "trtype": "TCP", 00:27:51.467 "adrfam": "IPv4", 00:27:51.467 "traddr": "10.0.0.2", 00:27:51.467 "trsvcid": "4420" 00:27:51.467 } 00:27:51.467 ], 00:27:51.467 "allow_any_host": true, 00:27:51.467 "hosts": [], 00:27:51.467 "serial_number": "SPDK00000000000001", 00:27:51.467 "model_number": "SPDK bdev Controller", 00:27:51.467 "max_namespaces": 1, 00:27:51.467 "min_cntlid": 1, 00:27:51.467 "max_cntlid": 65519, 00:27:51.467 "namespaces": [ 00:27:51.467 { 00:27:51.467 "nsid": 1, 00:27:51.467 "bdev_name": "Nvme0n1", 00:27:51.467 "name": "Nvme0n1", 00:27:51.467 "nguid": "DC12885B608D418C924E295171C4848A", 00:27:51.467 "uuid": "dc12885b-608d-418c-924e-295171c4848a" 00:27:51.467 } 00:27:51.467 ] 00:27:51.467 } 00:27:51.467 ] 00:27:51.467 17:36:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.467 17:36:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:51.467 17:36:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:51.467 17:36:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:51.467 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.467 17:36:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:27:51.467 17:36:09 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:51.467 17:36:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:51.467 17:36:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:51.467 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.467 17:36:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:51.467 17:36:10 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:27:51.467 17:36:10 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:51.467 17:36:10 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:51.467 17:36:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.467 17:36:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:51.467 17:36:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.467 17:36:10 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:51.467 17:36:10 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:51.467 17:36:10 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:51.467 17:36:10 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:51.467 17:36:10 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:51.467 17:36:10 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:51.467 17:36:10 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:51.467 17:36:10 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:51.467 rmmod 
nvme_tcp 00:27:51.467 rmmod nvme_fabrics 00:27:51.467 rmmod nvme_keyring 00:27:51.467 17:36:10 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:51.467 17:36:10 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:51.467 17:36:10 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:51.467 17:36:10 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 40437 ']' 00:27:51.467 17:36:10 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 40437 00:27:51.467 17:36:10 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 40437 ']' 00:27:51.467 17:36:10 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 40437 00:27:51.467 17:36:10 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:27:51.467 17:36:10 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:51.467 17:36:10 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 40437 00:27:51.467 17:36:10 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:51.467 17:36:10 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:51.467 17:36:10 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 40437' 00:27:51.467 killing process with pid 40437 00:27:51.467 17:36:10 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 40437 00:27:51.467 17:36:10 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 40437 00:27:53.372 17:36:11 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:53.372 17:36:11 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:53.372 17:36:11 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:53.372 17:36:11 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:53.372 17:36:11 
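The cleanup above (`set +e`, `for i in {1..20}`, `modprobe -v -r nvme-tcp`, then `nvme-fabrics`, then `set -e`) is a retry loop: module removal can fail transiently while connections drain, so errexit is suspended and the unload is reattempted. A sketch of that retry shape, with a stand-in command that succeeds on the third try (the real loop calls `modprobe -v -r`, which needs root):

```shell
# Retry pattern from common.sh's nvmfcleanup: disable errexit,
# attempt the unload up to 20 times, restore errexit afterwards.
attempt=0
try_unload() {            # stand-in for: modprobe -v -r nvme-tcp
    attempt=$((attempt + 1))
    [ "$attempt" -ge 3 ]  # pretend the module frees up on try 3
}

set +e
for i in $(seq 1 20); do
    if try_unload; then
        break
    fi
    # the real script pauses between attempts here
done
set -e
echo "unloaded after $attempt attempts"
```

The `rmmod nvme_tcp` / `rmmod nvme_fabrics` / `rmmod nvme_keyring` lines in the log are `modprobe -v` narrating each module it removed once the loop succeeded.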
nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:53.372 17:36:11 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.372 17:36:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:53.372 17:36:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.272 17:36:13 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:55.272 00:27:55.272 real 0m21.801s 00:27:55.272 user 0m29.814s 00:27:55.272 sys 0m4.817s 00:27:55.272 17:36:13 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:55.272 17:36:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:55.272 ************************************ 00:27:55.272 END TEST nvmf_identify_passthru 00:27:55.272 ************************************ 00:27:55.272 17:36:13 -- common/autotest_common.sh@1142 -- # return 0 00:27:55.272 17:36:13 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:55.272 17:36:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:55.272 17:36:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.272 17:36:13 -- common/autotest_common.sh@10 -- # set +x 00:27:55.272 ************************************ 00:27:55.272 START TEST nvmf_dif 00:27:55.272 ************************************ 00:27:55.272 17:36:13 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:55.272 * Looking for test storage... 
00:27:55.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:55.272 17:36:13 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.272 17:36:13 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.272 17:36:13 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.272 17:36:13 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.272 17:36:13 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.272 17:36:13 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.272 17:36:13 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.272 17:36:13 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:55.272 17:36:13 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.272 17:36:13 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:55.272 17:36:13 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:55.272 17:36:13 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:55.272 17:36:13 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:55.272 17:36:13 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:55.272 17:36:13 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:55.273 17:36:13 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:55.273 17:36:13 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.273 17:36:13 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:55.273 17:36:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.273 17:36:13 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:55.273 17:36:13 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:55.273 17:36:13 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:55.273 17:36:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:00.598 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:28:00.598 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:00.598 Found net devices under 0000:86:00.0: cvl_0_0 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:00.598 Found net devices under 0000:86:00.1: cvl_0_1 00:28:00.598 17:36:19 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.599 17:36:19 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:00.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:28:00.599 00:28:00.599 --- 10.0.0.2 ping statistics --- 00:28:00.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.599 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:00.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:28:00.599 00:28:00.599 --- 10.0.0.1 ping statistics --- 00:28:00.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.599 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:00.599 17:36:19 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:03.134 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:03.134 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:03.134 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:03.134 17:36:21 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.134 17:36:21 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:03.134 17:36:21 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:03.134 17:36:21 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.134 17:36:21 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:03.134 17:36:21 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:03.393 17:36:21 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:03.393 17:36:21 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:03.393 17:36:21 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:03.393 17:36:21 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:03.393 17:36:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:03.393 17:36:21 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=46485 00:28:03.393 17:36:21 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:03.393 17:36:21 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 46485 00:28:03.393 17:36:21 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 46485 ']' 00:28:03.393 17:36:21 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.393 17:36:21 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:03.393 17:36:21 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.393 17:36:21 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:03.393 17:36:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:03.393 [2024-07-12 17:36:21.996112] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:28:03.393 [2024-07-12 17:36:21.996153] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.393 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.393 [2024-07-12 17:36:22.053555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.393 [2024-07-12 17:36:22.131605] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.393 [2024-07-12 17:36:22.131643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.393 [2024-07-12 17:36:22.131650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.393 [2024-07-12 17:36:22.131656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.393 [2024-07-12 17:36:22.131661] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:03.393 [2024-07-12 17:36:22.131679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.331 17:36:22 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.331 17:36:22 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:28:04.331 17:36:22 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:04.331 17:36:22 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.331 17:36:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:04.331 17:36:22 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.331 17:36:22 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:04.332 17:36:22 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:04.332 17:36:22 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.332 17:36:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:04.332 [2024-07-12 17:36:22.827506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.332 17:36:22 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.332 17:36:22 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:04.332 17:36:22 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:04.332 17:36:22 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:04.332 17:36:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:04.332 ************************************ 00:28:04.332 START TEST fio_dif_1_default 00:28:04.332 ************************************ 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:04.332 bdev_null0 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:04.332 [2024-07-12 17:36:22.903799] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.332 { 00:28:04.332 "params": { 00:28:04.332 "name": "Nvme$subsystem", 00:28:04.332 "trtype": "$TEST_TRANSPORT", 00:28:04.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.332 "adrfam": "ipv4", 00:28:04.332 "trsvcid": "$NVMF_PORT", 00:28:04.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.332 "hdgst": ${hdgst:-false}, 
00:28:04.332 "ddgst": ${ddgst:-false} 00:28:04.332 }, 00:28:04.332 "method": "bdev_nvme_attach_controller" 00:28:04.332 } 00:28:04.332 EOF 00:28:04.332 )") 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:04.332 "params": { 00:28:04.332 "name": "Nvme0", 00:28:04.332 "trtype": "tcp", 00:28:04.332 "traddr": "10.0.0.2", 00:28:04.332 "adrfam": "ipv4", 00:28:04.332 "trsvcid": "4420", 00:28:04.332 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:04.332 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:04.332 "hdgst": false, 00:28:04.332 "ddgst": false 00:28:04.332 }, 00:28:04.332 "method": "bdev_nvme_attach_controller" 00:28:04.332 }' 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:04.332 17:36:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.591 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:04.591 fio-3.35 
00:28:04.591 Starting 1 thread 00:28:04.591 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.802 00:28:16.802 filename0: (groupid=0, jobs=1): err= 0: pid=46865: Fri Jul 12 17:36:33 2024 00:28:16.802 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:28:16.802 slat (nsec): min=4208, max=42726, avg=6170.12, stdev=1555.81 00:28:16.802 clat (usec): min=40846, max=44206, avg=41012.66, stdev=244.17 00:28:16.802 lat (usec): min=40853, max=44227, avg=41018.83, stdev=244.28 00:28:16.802 clat percentiles (usec): 00:28:16.802 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:28:16.802 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:16.802 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:16.802 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:28:16.802 | 99.99th=[44303] 00:28:16.802 bw ( KiB/s): min= 384, max= 416, per=99.50%, avg=388.80, stdev=11.72, samples=20 00:28:16.802 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:28:16.802 lat (msec) : 50=100.00% 00:28:16.802 cpu : usr=94.64%, sys=5.10%, ctx=15, majf=0, minf=248 00:28:16.802 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.802 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.802 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:16.802 00:28:16.802 Run status group 0 (all jobs): 00:28:16.802 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10012-10012msec 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub 
in "$@" 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.803 00:28:16.803 real 0m11.091s 00:28:16.803 user 0m15.592s 00:28:16.803 sys 0m0.833s 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:16.803 17:36:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:16.803 ************************************ 00:28:16.803 END TEST fio_dif_1_default 00:28:16.803 ************************************ 00:28:16.803 17:36:33 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:16.803 17:36:33 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:16.803 17:36:33 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:16.803 17:36:33 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:16.803 17:36:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:16.803 ************************************ 00:28:16.803 START TEST fio_dif_1_multi_subsystems 
00:28:16.803 ************************************ 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.803 bdev_null0 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.803 [2024-07-12 17:36:34.054905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.803 bdev_null1 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.803 
17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:16.803 { 00:28:16.803 "params": { 00:28:16.803 "name": "Nvme$subsystem", 00:28:16.803 
"trtype": "$TEST_TRANSPORT", 00:28:16.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.803 "adrfam": "ipv4", 00:28:16.803 "trsvcid": "$NVMF_PORT", 00:28:16.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.803 "hdgst": ${hdgst:-false}, 00:28:16.803 "ddgst": ${ddgst:-false} 00:28:16.803 }, 00:28:16.803 "method": "bdev_nvme_attach_controller" 00:28:16.803 } 00:28:16.803 EOF 00:28:16.803 )") 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:16.803 
17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:16.803 { 00:28:16.803 "params": { 00:28:16.803 "name": "Nvme$subsystem", 00:28:16.803 "trtype": "$TEST_TRANSPORT", 00:28:16.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.803 "adrfam": "ipv4", 00:28:16.803 "trsvcid": "$NVMF_PORT", 00:28:16.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.803 "hdgst": ${hdgst:-false}, 00:28:16.803 "ddgst": ${ddgst:-false} 00:28:16.803 }, 00:28:16.803 "method": "bdev_nvme_attach_controller" 00:28:16.803 } 00:28:16.803 EOF 00:28:16.803 )") 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:16.803 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:16.803 "params": { 00:28:16.803 "name": "Nvme0", 00:28:16.803 "trtype": "tcp", 00:28:16.803 "traddr": "10.0.0.2", 00:28:16.803 "adrfam": "ipv4", 00:28:16.803 "trsvcid": "4420", 00:28:16.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:16.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:16.803 "hdgst": false, 00:28:16.803 "ddgst": false 00:28:16.803 }, 00:28:16.803 "method": "bdev_nvme_attach_controller" 00:28:16.803 },{ 00:28:16.804 "params": { 00:28:16.804 "name": "Nvme1", 00:28:16.804 "trtype": "tcp", 00:28:16.804 "traddr": "10.0.0.2", 00:28:16.804 "adrfam": "ipv4", 00:28:16.804 "trsvcid": "4420", 00:28:16.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:16.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:16.804 "hdgst": false, 00:28:16.804 "ddgst": false 00:28:16.804 }, 00:28:16.804 "method": "bdev_nvme_attach_controller" 00:28:16.804 }' 00:28:16.804 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:16.804 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:16.804 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.804 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.804 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:16.804 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:16.804 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:16.804 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:16.804 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:16.804 17:36:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:16.804 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:16.804 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:16.804 fio-3.35 00:28:16.804 Starting 2 threads 00:28:16.804 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.812 00:28:26.812 filename0: (groupid=0, jobs=1): err= 0: pid=48826: Fri Jul 12 17:36:44 2024 00:28:26.812 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:28:26.812 slat (nsec): min=2929, max=22712, avg=7652.18, stdev=2556.28 00:28:26.812 clat (usec): min=40826, max=45945, avg=41009.55, stdev=338.38 00:28:26.812 lat (usec): min=40837, max=45955, avg=41017.21, stdev=338.29 00:28:26.812 clat percentiles (usec): 00:28:26.812 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:28:26.812 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:26.812 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:26.812 | 99.00th=[41681], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:28:26.812 | 99.99th=[45876] 00:28:26.812 bw ( KiB/s): min= 383, max= 416, per=33.84%, avg=388.75, stdev=11.75, samples=20 00:28:26.812 iops : min= 95, max= 104, avg=97.15, stdev= 2.96, samples=20 00:28:26.812 lat (msec) : 50=100.00% 00:28:26.812 cpu : usr=97.77%, sys=1.98%, ctx=10, majf=0, minf=100 00:28:26.812 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:26.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:28:26.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.812 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.812 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:26.812 filename1: (groupid=0, jobs=1): err= 0: pid=48827: Fri Jul 12 17:36:44 2024 00:28:26.812 read: IOPS=189, BW=757KiB/s (775kB/s)(7584KiB/10020msec) 00:28:26.812 slat (nsec): min=4319, max=23876, avg=6969.45, stdev=1963.75 00:28:26.812 clat (usec): min=399, max=45153, avg=21119.32, stdev=20417.56 00:28:26.812 lat (usec): min=405, max=45166, avg=21126.29, stdev=20416.89 00:28:26.812 clat percentiles (usec): 00:28:26.812 | 1.00th=[ 420], 5.00th=[ 445], 10.00th=[ 453], 20.00th=[ 502], 00:28:26.812 | 30.00th=[ 562], 40.00th=[ 611], 50.00th=[40633], 60.00th=[41157], 00:28:26.812 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:28:26.812 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:28:26.812 | 99.99th=[45351] 00:28:26.812 bw ( KiB/s): min= 672, max= 768, per=65.94%, avg=756.80, stdev=28.00, samples=20 00:28:26.812 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:28:26.812 lat (usec) : 500=19.15%, 750=29.54%, 1000=0.90% 00:28:26.812 lat (msec) : 50=50.42% 00:28:26.812 cpu : usr=97.23%, sys=2.53%, ctx=13, majf=0, minf=165 00:28:26.812 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:26.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:26.812 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:26.812 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:26.812 00:28:26.812 Run status group 0 (all jobs): 00:28:26.812 READ: bw=1147KiB/s (1174kB/s), 390KiB/s-757KiB/s (399kB/s-775kB/s), io=11.2MiB (11.8MB), run=10012-10020msec 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.812 00:28:26.812 real 0m11.159s 00:28:26.812 user 0m26.273s 00:28:26.812 sys 0m0.707s 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:26.812 17:36:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.812 ************************************ 00:28:26.812 END TEST fio_dif_1_multi_subsystems 00:28:26.812 ************************************ 00:28:26.812 17:36:45 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:26.812 17:36:45 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:26.812 17:36:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:26.812 17:36:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:26.812 17:36:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:26.812 ************************************ 00:28:26.812 START TEST fio_dif_rand_params 00:28:26.812 ************************************ 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 
00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.812 bdev_null0 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:26.812 [2024-07-12 17:36:45.289346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.812 { 00:28:26.812 "params": { 00:28:26.812 "name": "Nvme$subsystem", 00:28:26.812 "trtype": "$TEST_TRANSPORT", 00:28:26.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.812 "adrfam": "ipv4", 00:28:26.812 "trsvcid": "$NVMF_PORT", 00:28:26.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.812 "hdgst": ${hdgst:-false}, 00:28:26.812 "ddgst": ${ddgst:-false} 00:28:26.812 }, 00:28:26.812 "method": 
"bdev_nvme_attach_controller" 00:28:26.812 } 00:28:26.812 EOF 00:28:26.812 )") 00:28:26.812 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:26.813 "params": { 00:28:26.813 "name": "Nvme0", 00:28:26.813 "trtype": "tcp", 00:28:26.813 "traddr": "10.0.0.2", 00:28:26.813 "adrfam": "ipv4", 00:28:26.813 "trsvcid": "4420", 00:28:26.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:26.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:26.813 "hdgst": false, 00:28:26.813 "ddgst": false 00:28:26.813 }, 00:28:26.813 "method": "bdev_nvme_attach_controller" 00:28:26.813 }' 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk 
'{print $3}' 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:26.813 17:36:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:27.078 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:27.078 ... 00:28:27.078 fio-3.35 00:28:27.078 Starting 3 threads 00:28:27.078 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.625 00:28:33.625 filename0: (groupid=0, jobs=1): err= 0: pid=50646: Fri Jul 12 17:36:51 2024 00:28:33.625 read: IOPS=324, BW=40.5MiB/s (42.5MB/s)(204MiB/5045msec) 00:28:33.625 slat (nsec): min=6367, max=52606, avg=19303.11, stdev=9843.72 00:28:33.625 clat (usec): min=3393, max=51198, avg=9208.83, stdev=9517.64 00:28:33.625 lat (usec): min=3401, max=51230, avg=9228.14, stdev=9517.91 00:28:33.625 clat percentiles (usec): 00:28:33.625 | 1.00th=[ 3818], 5.00th=[ 3982], 10.00th=[ 4293], 20.00th=[ 5276], 00:28:33.625 | 30.00th=[ 5997], 40.00th=[ 6390], 50.00th=[ 6718], 60.00th=[ 7242], 00:28:33.625 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[10552], 95.00th=[45876], 00:28:33.625 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50594], 99.95th=[51119], 00:28:33.625 | 99.99th=[51119] 00:28:33.625 bw ( KiB/s): min=20736, max=47616, per=38.67%, avg=41804.80, stdev=8222.21, samples=10 00:28:33.625 iops : min= 162, max= 372, avg=326.60, stdev=64.24, samples=10 00:28:33.625 lat (msec) : 4=5.32%, 10=79.94%, 20=9.30%, 50=4.95%, 100=0.49% 00:28:33.625 cpu : usr=95.30%, sys=4.38%, ctx=10, majf=0, minf=159 00:28:33.625 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:28:33.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.625 issued rwts: total=1635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.625 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:33.625 filename0: (groupid=0, jobs=1): err= 0: pid=50647: Fri Jul 12 17:36:51 2024 00:28:33.625 read: IOPS=259, BW=32.5MiB/s (34.0MB/s)(163MiB/5007msec) 00:28:33.625 slat (nsec): min=6305, max=39607, avg=15672.45, stdev=8384.54 00:28:33.625 clat (usec): min=3414, max=89573, avg=11534.31, stdev=13019.36 00:28:33.625 lat (usec): min=3422, max=89596, avg=11549.98, stdev=13019.55 00:28:33.625 clat percentiles (usec): 00:28:33.625 | 1.00th=[ 3752], 5.00th=[ 4047], 10.00th=[ 4555], 20.00th=[ 5932], 00:28:33.625 | 30.00th=[ 6325], 40.00th=[ 6783], 50.00th=[ 7439], 60.00th=[ 8225], 00:28:33.625 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[45351], 95.00th=[49021], 00:28:33.625 | 99.00th=[50594], 99.50th=[51119], 99.90th=[89654], 99.95th=[89654], 00:28:33.625 | 99.99th=[89654] 00:28:33.625 bw ( KiB/s): min=24320, max=42496, per=30.71%, avg=33203.20, stdev=5639.82, samples=10 00:28:33.625 iops : min= 190, max= 332, avg=259.40, stdev=44.06, samples=10 00:28:33.625 lat (msec) : 4=4.15%, 10=81.62%, 20=4.00%, 50=7.92%, 100=2.31% 00:28:33.625 cpu : usr=96.72%, sys=2.94%, ctx=15, majf=0, minf=43 00:28:33.625 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:33.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.625 issued rwts: total=1300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.625 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:33.625 filename0: (groupid=0, jobs=1): err= 0: pid=50648: Fri Jul 12 17:36:51 2024 00:28:33.625 read: IOPS=262, BW=32.9MiB/s 
(34.5MB/s)(166MiB/5044msec) 00:28:33.625 slat (usec): min=6, max=139, avg=14.90, stdev= 8.99 00:28:33.625 clat (usec): min=3555, max=52844, avg=11329.48, stdev=12796.78 00:28:33.625 lat (usec): min=3562, max=52867, avg=11344.39, stdev=12796.95 00:28:33.625 clat percentiles (usec): 00:28:33.625 | 1.00th=[ 3752], 5.00th=[ 3949], 10.00th=[ 4146], 20.00th=[ 5407], 00:28:33.625 | 30.00th=[ 6194], 40.00th=[ 6521], 50.00th=[ 7046], 60.00th=[ 8455], 00:28:33.625 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[45351], 95.00th=[49546], 00:28:33.625 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:28:33.625 | 99.99th=[52691] 00:28:33.625 bw ( KiB/s): min=26368, max=41216, per=31.38%, avg=33928.00, stdev=4837.18, samples=10 00:28:33.625 iops : min= 206, max= 322, avg=265.00, stdev=37.70, samples=10 00:28:33.625 lat (msec) : 4=6.26%, 10=75.87%, 20=7.84%, 50=5.96%, 100=4.07% 00:28:33.625 cpu : usr=93.42%, sys=4.46%, ctx=361, majf=0, minf=180 00:28:33.625 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:33.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.625 issued rwts: total=1326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.625 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:33.625 00:28:33.625 Run status group 0 (all jobs): 00:28:33.625 READ: bw=106MiB/s (111MB/s), 32.5MiB/s-40.5MiB/s (34.0MB/s-42.5MB/s), io=533MiB (558MB), run=5007-5045msec 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local 
sub_id=0 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:33.625 
17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:33.625 bdev_null0 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:33.625 [2024-07-12 17:36:51.395360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 1 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:33.625 bdev_null1 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:33.625 bdev_null2 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.625 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:33.626 { 00:28:33.626 "params": { 00:28:33.626 "name": "Nvme$subsystem", 00:28:33.626 "trtype": "$TEST_TRANSPORT", 00:28:33.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.626 "adrfam": "ipv4", 00:28:33.626 "trsvcid": "$NVMF_PORT", 00:28:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.626 "hdgst": ${hdgst:-false}, 00:28:33.626 "ddgst": ${ddgst:-false} 00:28:33.626 }, 00:28:33.626 "method": "bdev_nvme_attach_controller" 00:28:33.626 } 00:28:33.626 EOF 00:28:33.626 )") 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:33.626 17:36:51 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:33.626 { 00:28:33.626 "params": { 00:28:33.626 "name": "Nvme$subsystem", 00:28:33.626 "trtype": "$TEST_TRANSPORT", 00:28:33.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.626 
"adrfam": "ipv4", 00:28:33.626 "trsvcid": "$NVMF_PORT", 00:28:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.626 "hdgst": ${hdgst:-false}, 00:28:33.626 "ddgst": ${ddgst:-false} 00:28:33.626 }, 00:28:33.626 "method": "bdev_nvme_attach_controller" 00:28:33.626 } 00:28:33.626 EOF 00:28:33.626 )") 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:33.626 { 00:28:33.626 "params": { 00:28:33.626 "name": "Nvme$subsystem", 00:28:33.626 "trtype": "$TEST_TRANSPORT", 00:28:33.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.626 "adrfam": "ipv4", 00:28:33.626 "trsvcid": "$NVMF_PORT", 00:28:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.626 "hdgst": ${hdgst:-false}, 00:28:33.626 "ddgst": ${ddgst:-false} 00:28:33.626 }, 00:28:33.626 "method": "bdev_nvme_attach_controller" 00:28:33.626 } 00:28:33.626 EOF 00:28:33.626 )") 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:33.626 "params": { 00:28:33.626 "name": "Nvme0", 00:28:33.626 "trtype": "tcp", 00:28:33.626 "traddr": "10.0.0.2", 00:28:33.626 "adrfam": "ipv4", 00:28:33.626 "trsvcid": "4420", 00:28:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:33.626 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:33.626 "hdgst": false, 00:28:33.626 "ddgst": false 00:28:33.626 }, 00:28:33.626 "method": "bdev_nvme_attach_controller" 00:28:33.626 },{ 00:28:33.626 "params": { 00:28:33.626 "name": "Nvme1", 00:28:33.626 "trtype": "tcp", 00:28:33.626 "traddr": "10.0.0.2", 00:28:33.626 "adrfam": "ipv4", 00:28:33.626 "trsvcid": "4420", 00:28:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:33.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:33.626 "hdgst": false, 00:28:33.626 "ddgst": false 00:28:33.626 }, 00:28:33.626 "method": "bdev_nvme_attach_controller" 00:28:33.626 },{ 00:28:33.626 "params": { 00:28:33.626 "name": "Nvme2", 00:28:33.626 "trtype": "tcp", 00:28:33.626 "traddr": "10.0.0.2", 00:28:33.626 "adrfam": "ipv4", 00:28:33.626 "trsvcid": "4420", 00:28:33.626 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:33.626 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:33.626 "hdgst": false, 00:28:33.626 "ddgst": false 00:28:33.626 }, 00:28:33.626 "method": "bdev_nvme_attach_controller" 00:28:33.626 }' 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:33.626 17:36:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:33.626 17:36:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:33.626 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:33.626 ... 00:28:33.626 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:33.626 ... 00:28:33.626 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:33.626 ... 
00:28:33.626 fio-3.35 00:28:33.626 Starting 24 threads 00:28:33.626 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.869 00:28:45.869 filename0: (groupid=0, jobs=1): err= 0: pid=51841: Fri Jul 12 17:37:02 2024 00:28:45.869 read: IOPS=251, BW=1006KiB/s (1030kB/s)(9.93MiB/10112msec) 00:28:45.869 slat (nsec): min=6372, max=38339, avg=11468.34, stdev=4739.17 00:28:45.869 clat (usec): min=1166, max=422900, avg=63449.11, stdev=105313.42 00:28:45.869 lat (usec): min=1189, max=422909, avg=63460.58, stdev=105312.76 00:28:45.869 clat percentiles (usec): 00:28:45.869 | 1.00th=[ 1401], 5.00th=[ 3720], 10.00th=[ 23462], 20.00th=[ 24773], 00:28:45.869 | 30.00th=[ 25297], 40.00th=[ 25560], 50.00th=[ 25560], 60.00th=[ 25822], 00:28:45.869 | 70.00th=[ 26346], 80.00th=[ 26870], 90.00th=[325059], 95.00th=[350225], 00:28:45.869 | 99.00th=[367002], 99.50th=[383779], 99.90th=[383779], 99.95th=[421528], 00:28:45.869 | 99.99th=[421528] 00:28:45.869 bw ( KiB/s): min= 128, max= 3592, per=4.51%, avg=1010.80, stdev=1192.43, samples=20 00:28:45.869 iops : min= 32, max= 898, avg=252.70, stdev=298.11, samples=20 00:28:45.869 lat (msec) : 2=1.93%, 4=3.42%, 10=0.63%, 20=0.98%, 50=80.53% 00:28:45.869 lat (msec) : 250=0.55%, 500=11.95% 00:28:45.869 cpu : usr=99.01%, sys=0.63%, ctx=10, majf=0, minf=38 00:28:45.869 IO depths : 1=5.2%, 2=11.0%, 4=23.5%, 8=52.7%, 16=7.7%, 32=0.0%, >=64=0.0% 00:28:45.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.869 complete : 0=0.0%, 4=93.8%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.869 issued rwts: total=2543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.869 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.869 filename0: (groupid=0, jobs=1): err= 0: pid=51842: Fri Jul 12 17:37:02 2024 00:28:45.869 read: IOPS=234, BW=939KiB/s (962kB/s)(9472KiB/10085msec) 00:28:45.869 slat (nsec): min=6379, max=62063, avg=18089.24, stdev=6885.66 00:28:45.869 clat (msec): min=22, max=450, avg=67.98, 
stdev=107.88 00:28:45.869 lat (msec): min=22, max=450, avg=68.00, stdev=107.88 00:28:45.869 clat percentiles (msec): 00:28:45.869 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 26], 00:28:45.869 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.869 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 338], 95.00th=[ 351], 00:28:45.869 | 99.00th=[ 368], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 451], 00:28:45.869 | 99.99th=[ 451] 00:28:45.869 bw ( KiB/s): min= 128, max= 2560, per=4.20%, avg=940.80, stdev=1074.19, samples=20 00:28:45.869 iops : min= 32, max= 640, avg=235.20, stdev=268.55, samples=20 00:28:45.869 lat (msec) : 50=85.81%, 100=0.76%, 250=0.59%, 500=12.84% 00:28:45.869 cpu : usr=99.11%, sys=0.51%, ctx=12, majf=0, minf=41 00:28:45.869 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:28:45.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.869 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.869 issued rwts: total=2368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.869 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.869 filename0: (groupid=0, jobs=1): err= 0: pid=51843: Fri Jul 12 17:37:02 2024 00:28:45.870 read: IOPS=234, BW=939KiB/s (962kB/s)(9472KiB/10084msec) 00:28:45.870 slat (nsec): min=6461, max=78905, avg=30016.77, stdev=15802.52 00:28:45.870 clat (msec): min=15, max=462, avg=67.88, stdev=108.12 00:28:45.870 lat (msec): min=15, max=462, avg=67.91, stdev=108.11 00:28:45.870 clat percentiles (msec): 00:28:45.870 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 25], 00:28:45.870 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.870 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 334], 95.00th=[ 351], 00:28:45.870 | 99.00th=[ 368], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 464], 00:28:45.870 | 99.99th=[ 464] 00:28:45.870 bw ( KiB/s): min= 128, max= 2560, per=4.20%, avg=940.80, stdev=1074.36, samples=20 
00:28:45.870 iops : min= 32, max= 640, avg=235.20, stdev=268.59, samples=20 00:28:45.870 lat (msec) : 20=0.93%, 50=85.56%, 250=0.76%, 500=12.75% 00:28:45.870 cpu : usr=98.73%, sys=0.81%, ctx=28, majf=0, minf=34 00:28:45.870 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:28:45.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 issued rwts: total=2368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.870 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.870 filename0: (groupid=0, jobs=1): err= 0: pid=51844: Fri Jul 12 17:37:02 2024 00:28:45.870 read: IOPS=233, BW=934KiB/s (956kB/s)(9400KiB/10068msec) 00:28:45.870 slat (usec): min=6, max=184, avg=21.94, stdev=24.49 00:28:45.870 clat (msec): min=13, max=463, avg=68.30, stdev=108.78 00:28:45.870 lat (msec): min=13, max=463, avg=68.33, stdev=108.78 00:28:45.870 clat percentiles (msec): 00:28:45.870 | 1.00th=[ 15], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 26], 00:28:45.870 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.870 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 338], 95.00th=[ 351], 00:28:45.870 | 99.00th=[ 368], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 464], 00:28:45.870 | 99.99th=[ 464] 00:28:45.870 bw ( KiB/s): min= 128, max= 2560, per=4.17%, avg=933.60, stdev=1070.39, samples=20 00:28:45.870 iops : min= 32, max= 640, avg=233.40, stdev=267.60, samples=20 00:28:45.870 lat (msec) : 20=1.45%, 50=85.02%, 250=0.60%, 500=12.94% 00:28:45.870 cpu : usr=99.19%, sys=0.42%, ctx=35, majf=0, minf=22 00:28:45.870 IO depths : 1=4.5%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:28:45.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 issued rwts: total=2350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.870 
latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.870 filename0: (groupid=0, jobs=1): err= 0: pid=51845: Fri Jul 12 17:37:02 2024 00:28:45.870 read: IOPS=231, BW=928KiB/s (950kB/s)(9336KiB/10064msec) 00:28:45.870 slat (nsec): min=6409, max=67451, avg=28915.20, stdev=15059.13 00:28:45.870 clat (msec): min=23, max=658, avg=68.72, stdev=114.30 00:28:45.870 lat (msec): min=23, max=658, avg=68.75, stdev=114.29 00:28:45.870 clat percentiles (msec): 00:28:45.870 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 26], 00:28:45.870 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.870 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 338], 95.00th=[ 355], 00:28:45.870 | 99.00th=[ 489], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 659], 00:28:45.870 | 99.99th=[ 659] 00:28:45.870 bw ( KiB/s): min= 128, max= 2560, per=4.14%, avg=927.40, stdev=1070.47, samples=20 00:28:45.870 iops : min= 32, max= 640, avg=231.85, stdev=267.62, samples=20 00:28:45.870 lat (msec) : 50=86.38%, 100=0.69%, 250=0.60%, 500=11.57%, 750=0.77% 00:28:45.870 cpu : usr=97.99%, sys=1.18%, ctx=99, majf=0, minf=35 00:28:45.870 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:28:45.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.870 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.870 filename0: (groupid=0, jobs=1): err= 0: pid=51846: Fri Jul 12 17:37:02 2024 00:28:45.870 read: IOPS=233, BW=934KiB/s (956kB/s)(9408KiB/10072msec) 00:28:45.870 slat (nsec): min=6255, max=74795, avg=32319.20, stdev=16397.69 00:28:45.870 clat (msec): min=23, max=469, avg=68.23, stdev=108.42 00:28:45.870 lat (msec): min=23, max=469, avg=68.26, stdev=108.41 00:28:45.870 clat percentiles (msec): 00:28:45.870 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 
25], 00:28:45.870 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.870 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 334], 95.00th=[ 351], 00:28:45.870 | 99.00th=[ 368], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 468], 00:28:45.870 | 99.99th=[ 468] 00:28:45.870 bw ( KiB/s): min= 128, max= 2560, per=4.17%, avg=934.40, stdev=1078.94, samples=20 00:28:45.870 iops : min= 32, max= 640, avg=233.60, stdev=269.73, samples=20 00:28:45.870 lat (msec) : 50=86.39%, 250=0.77%, 500=12.84% 00:28:45.870 cpu : usr=99.02%, sys=0.63%, ctx=9, majf=0, minf=37 00:28:45.870 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:28:45.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.870 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.870 filename0: (groupid=0, jobs=1): err= 0: pid=51847: Fri Jul 12 17:37:02 2024 00:28:45.870 read: IOPS=234, BW=939KiB/s (962kB/s)(9472KiB/10084msec) 00:28:45.870 slat (nsec): min=6416, max=74852, avg=26391.99, stdev=14353.61 00:28:45.870 clat (msec): min=14, max=459, avg=67.91, stdev=108.09 00:28:45.870 lat (msec): min=14, max=459, avg=67.94, stdev=108.08 00:28:45.870 clat percentiles (msec): 00:28:45.870 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 25], 00:28:45.870 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.870 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 334], 95.00th=[ 351], 00:28:45.870 | 99.00th=[ 368], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 460], 00:28:45.870 | 99.99th=[ 460] 00:28:45.870 bw ( KiB/s): min= 128, max= 2560, per=4.20%, avg=940.80, stdev=1074.01, samples=20 00:28:45.870 iops : min= 32, max= 640, avg=235.20, stdev=268.50, samples=20 00:28:45.870 lat (msec) : 20=0.68%, 50=85.81%, 250=0.76%, 500=12.75% 00:28:45.870 cpu : usr=98.99%, sys=0.60%, ctx=60, 
majf=0, minf=34 00:28:45.870 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:28:45.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 issued rwts: total=2368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.870 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.870 filename0: (groupid=0, jobs=1): err= 0: pid=51848: Fri Jul 12 17:37:02 2024 00:28:45.870 read: IOPS=234, BW=938KiB/s (960kB/s)(9456KiB/10084msec) 00:28:45.870 slat (nsec): min=6270, max=91379, avg=29436.15, stdev=17030.75 00:28:45.870 clat (msec): min=17, max=526, avg=67.83, stdev=109.28 00:28:45.870 lat (msec): min=17, max=526, avg=67.86, stdev=109.27 00:28:45.870 clat percentiles (msec): 00:28:45.870 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 25], 00:28:45.870 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.870 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 330], 95.00th=[ 355], 00:28:45.870 | 99.00th=[ 384], 99.50th=[ 464], 99.90th=[ 527], 99.95th=[ 527], 00:28:45.870 | 99.99th=[ 527] 00:28:45.870 bw ( KiB/s): min= 128, max= 2560, per=4.19%, avg=939.20, stdev=1074.79, samples=20 00:28:45.870 iops : min= 32, max= 640, avg=234.80, stdev=268.70, samples=20 00:28:45.870 lat (msec) : 20=0.68%, 50=85.96%, 250=0.93%, 500=12.27%, 750=0.17% 00:28:45.870 cpu : usr=99.09%, sys=0.51%, ctx=25, majf=0, minf=34 00:28:45.870 IO depths : 1=5.6%, 2=11.4%, 4=23.6%, 8=52.5%, 16=6.9%, 32=0.0%, >=64=0.0% 00:28:45.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 issued rwts: total=2364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.870 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.870 filename1: (groupid=0, jobs=1): err= 0: pid=51849: Fri Jul 12 17:37:02 2024 00:28:45.870 read: IOPS=230, 
BW=923KiB/s (945kB/s)(9328KiB/10105msec) 00:28:45.870 slat (usec): min=6, max=146, avg=34.35, stdev=19.07 00:28:45.870 clat (msec): min=14, max=668, avg=68.87, stdev=120.35 00:28:45.870 lat (msec): min=14, max=668, avg=68.90, stdev=120.35 00:28:45.870 clat percentiles (msec): 00:28:45.870 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 25], 00:28:45.870 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.870 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 342], 95.00th=[ 359], 00:28:45.870 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 535], 99.95th=[ 667], 00:28:45.870 | 99.99th=[ 667] 00:28:45.870 bw ( KiB/s): min= 128, max= 2560, per=4.14%, avg=926.40, stdev=1084.31, samples=20 00:28:45.870 iops : min= 32, max= 640, avg=231.60, stdev=271.08, samples=20 00:28:45.870 lat (msec) : 20=0.77%, 50=87.05%, 250=0.94%, 500=9.69%, 750=1.54% 00:28:45.870 cpu : usr=98.85%, sys=0.71%, ctx=51, majf=0, minf=33 00:28:45.870 IO depths : 1=5.7%, 2=11.7%, 4=24.1%, 8=51.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:28:45.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 issued rwts: total=2332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.870 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.870 filename1: (groupid=0, jobs=1): err= 0: pid=51850: Fri Jul 12 17:37:02 2024 00:28:45.870 read: IOPS=234, BW=939KiB/s (962kB/s)(9472KiB/10084msec) 00:28:45.870 slat (nsec): min=6429, max=76419, avg=28661.80, stdev=15516.31 00:28:45.870 clat (msec): min=17, max=480, avg=67.89, stdev=108.10 00:28:45.870 lat (msec): min=17, max=480, avg=67.92, stdev=108.09 00:28:45.870 clat percentiles (msec): 00:28:45.870 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 25], 00:28:45.870 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.870 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 334], 95.00th=[ 351], 00:28:45.870 | 99.00th=[ 368], 
99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 481], 00:28:45.870 | 99.99th=[ 481] 00:28:45.870 bw ( KiB/s): min= 128, max= 2560, per=4.20%, avg=940.80, stdev=1074.10, samples=20 00:28:45.870 iops : min= 32, max= 640, avg=235.20, stdev=268.52, samples=20 00:28:45.870 lat (msec) : 20=0.68%, 50=85.81%, 250=0.68%, 500=12.84% 00:28:45.870 cpu : usr=98.62%, sys=0.81%, ctx=47, majf=0, minf=27 00:28:45.870 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:28:45.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.870 issued rwts: total=2368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.870 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.870 filename1: (groupid=0, jobs=1): err= 0: pid=51851: Fri Jul 12 17:37:02 2024 00:28:45.871 read: IOPS=245, BW=982KiB/s (1005kB/s)(9920KiB/10106msec) 00:28:45.871 slat (usec): min=6, max=159, avg=18.68, stdev=11.07 00:28:45.871 clat (msec): min=2, max=382, avg=65.04, stdev=106.02 00:28:45.871 lat (msec): min=2, max=382, avg=65.05, stdev=106.02 00:28:45.871 clat percentiles (msec): 00:28:45.871 | 1.00th=[ 4], 5.00th=[ 23], 10.00th=[ 24], 20.00th=[ 25], 00:28:45.871 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.871 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 326], 95.00th=[ 351], 00:28:45.871 | 99.00th=[ 368], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:28:45.871 | 99.99th=[ 384] 00:28:45.871 bw ( KiB/s): min= 128, max= 3200, per=4.40%, avg=985.60, stdev=1145.70, samples=20 00:28:45.871 iops : min= 32, max= 800, avg=246.40, stdev=286.42, samples=20 00:28:45.871 lat (msec) : 4=2.86%, 10=0.65%, 20=1.01%, 50=82.58%, 250=0.65% 00:28:45.871 lat (msec) : 500=12.26% 00:28:45.871 cpu : usr=98.61%, sys=0.81%, ctx=65, majf=0, minf=50 00:28:45.871 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:45.871 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 issued rwts: total=2480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.871 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.871 filename1: (groupid=0, jobs=1): err= 0: pid=51852: Fri Jul 12 17:37:02 2024 00:28:45.871 read: IOPS=230, BW=921KiB/s (943kB/s)(9272KiB/10071msec) 00:28:45.871 slat (usec): min=6, max=142, avg=15.29, stdev=15.41 00:28:45.871 clat (msec): min=14, max=645, avg=69.39, stdev=119.22 00:28:45.871 lat (msec): min=14, max=645, avg=69.40, stdev=119.22 00:28:45.871 clat percentiles (msec): 00:28:45.871 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 26], 00:28:45.871 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.871 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 342], 95.00th=[ 363], 00:28:45.871 | 99.00th=[ 493], 99.50th=[ 514], 99.90th=[ 531], 99.95th=[ 642], 00:28:45.871 | 99.99th=[ 642] 00:28:45.871 bw ( KiB/s): min= 112, max= 2560, per=4.11%, avg=921.00, stdev=1080.68, samples=20 00:28:45.871 iops : min= 28, max= 640, avg=230.25, stdev=270.17, samples=20 00:28:45.871 lat (msec) : 20=0.17%, 50=86.80%, 100=0.69%, 250=0.60%, 500=10.96% 00:28:45.871 lat (msec) : 750=0.78% 00:28:45.871 cpu : usr=98.95%, sys=0.69%, ctx=17, majf=0, minf=38 00:28:45.871 IO depths : 1=1.3%, 2=7.5%, 4=25.0%, 8=55.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:28:45.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 issued rwts: total=2318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.871 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.871 filename1: (groupid=0, jobs=1): err= 0: pid=51853: Fri Jul 12 17:37:02 2024 00:28:45.871 read: IOPS=233, BW=934KiB/s (956kB/s)(9400KiB/10065msec) 00:28:45.871 slat (nsec): min=6316, max=70679, avg=17231.21, stdev=9404.92 
00:28:45.871 clat (msec): min=13, max=447, avg=68.33, stdev=108.27 00:28:45.871 lat (msec): min=13, max=447, avg=68.35, stdev=108.26 00:28:45.871 clat percentiles (msec): 00:28:45.871 | 1.00th=[ 15], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 26], 00:28:45.871 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.871 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 338], 95.00th=[ 351], 00:28:45.871 | 99.00th=[ 368], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 447], 00:28:45.871 | 99.99th=[ 447] 00:28:45.871 bw ( KiB/s): min= 128, max= 2560, per=4.17%, avg=933.60, stdev=1070.39, samples=20 00:28:45.871 iops : min= 32, max= 640, avg=233.40, stdev=267.60, samples=20 00:28:45.871 lat (msec) : 20=1.19%, 50=85.28%, 250=0.60%, 500=12.94% 00:28:45.871 cpu : usr=99.10%, sys=0.54%, ctx=16, majf=0, minf=31 00:28:45.871 IO depths : 1=4.3%, 2=10.6%, 4=24.9%, 8=52.1%, 16=8.1%, 32=0.0%, >=64=0.0% 00:28:45.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 issued rwts: total=2350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.871 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.871 filename1: (groupid=0, jobs=1): err= 0: pid=51854: Fri Jul 12 17:37:02 2024 00:28:45.871 read: IOPS=234, BW=939KiB/s (961kB/s)(9464KiB/10084msec) 00:28:45.871 slat (nsec): min=6335, max=64960, avg=19478.45, stdev=8666.92 00:28:45.871 clat (msec): min=22, max=439, avg=67.97, stdev=107.93 00:28:45.871 lat (msec): min=22, max=439, avg=67.99, stdev=107.93 00:28:45.871 clat percentiles (msec): 00:28:45.871 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 26], 00:28:45.871 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.871 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 338], 95.00th=[ 351], 00:28:45.871 | 99.00th=[ 368], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 439], 00:28:45.871 | 99.99th=[ 439] 00:28:45.871 bw ( KiB/s): min= 128, max= 
2560, per=4.20%, avg=940.00, stdev=1074.64, samples=20 00:28:45.871 iops : min= 32, max= 640, avg=235.00, stdev=268.66, samples=20 00:28:45.871 lat (msec) : 50=85.88%, 100=0.68%, 250=0.59%, 500=12.85% 00:28:45.871 cpu : usr=98.93%, sys=0.66%, ctx=49, majf=0, minf=37 00:28:45.871 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:28:45.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.871 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.871 filename1: (groupid=0, jobs=1): err= 0: pid=51855: Fri Jul 12 17:37:02 2024 00:28:45.871 read: IOPS=234, BW=936KiB/s (959kB/s)(9440KiB/10084msec) 00:28:45.871 slat (usec): min=6, max=179, avg=35.06, stdev=22.91 00:28:45.871 clat (msec): min=17, max=530, avg=68.03, stdev=111.31 00:28:45.871 lat (msec): min=17, max=530, avg=68.06, stdev=111.30 00:28:45.871 clat percentiles (msec): 00:28:45.871 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 25], 00:28:45.871 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.871 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 326], 95.00th=[ 351], 00:28:45.871 | 99.00th=[ 397], 99.50th=[ 518], 99.90th=[ 531], 99.95th=[ 531], 00:28:45.871 | 99.99th=[ 531] 00:28:45.871 bw ( KiB/s): min= 128, max= 2560, per=4.19%, avg=937.60, stdev=1075.86, samples=20 00:28:45.871 iops : min= 32, max= 640, avg=234.40, stdev=268.97, samples=20 00:28:45.871 lat (msec) : 20=0.68%, 50=86.10%, 250=0.93%, 500=11.69%, 750=0.59% 00:28:45.871 cpu : usr=98.98%, sys=0.59%, ctx=24, majf=0, minf=30 00:28:45.871 IO depths : 1=5.6%, 2=11.3%, 4=23.3%, 8=52.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:28:45.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 issued 
rwts: total=2360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.871 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.871 filename1: (groupid=0, jobs=1): err= 0: pid=51856: Fri Jul 12 17:37:02 2024 00:28:45.871 read: IOPS=222, BW=891KiB/s (912kB/s)(8960KiB/10057msec) 00:28:45.871 slat (usec): min=6, max=161, avg=31.10, stdev=20.26 00:28:45.871 clat (msec): min=20, max=680, avg=71.58, stdev=143.23 00:28:45.871 lat (msec): min=20, max=680, avg=71.61, stdev=143.22 00:28:45.871 clat percentiles (msec): 00:28:45.871 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 25], 00:28:45.871 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.871 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 51], 95.00th=[ 514], 00:28:45.871 | 99.00th=[ 542], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 684], 00:28:45.871 | 99.99th=[ 684] 00:28:45.871 bw ( KiB/s): min= 112, max= 2560, per=4.18%, avg=936.42, stdev=1110.75, samples=19 00:28:45.871 iops : min= 28, max= 640, avg=234.11, stdev=277.69, samples=19 00:28:45.871 lat (msec) : 50=89.91%, 100=0.71%, 500=2.41%, 750=6.96% 00:28:45.871 cpu : usr=98.91%, sys=0.62%, ctx=42, majf=0, minf=65 00:28:45.871 IO depths : 1=1.3%, 2=7.5%, 4=25.0%, 8=55.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:28:45.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.871 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.871 filename2: (groupid=0, jobs=1): err= 0: pid=51857: Fri Jul 12 17:37:02 2024 00:28:45.871 read: IOPS=234, BW=939KiB/s (962kB/s)(9472KiB/10085msec) 00:28:45.871 slat (nsec): min=6317, max=89957, avg=33434.88, stdev=18379.19 00:28:45.871 clat (msec): min=17, max=494, avg=67.82, stdev=108.21 00:28:45.871 lat (msec): min=17, max=494, avg=67.86, stdev=108.20 00:28:45.871 clat percentiles (msec): 00:28:45.871 | 1.00th=[ 24], 
5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 25], 00:28:45.871 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.871 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 334], 95.00th=[ 351], 00:28:45.871 | 99.00th=[ 368], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 493], 00:28:45.871 | 99.99th=[ 493] 00:28:45.871 bw ( KiB/s): min= 128, max= 2560, per=4.20%, avg=940.80, stdev=1073.92, samples=20 00:28:45.871 iops : min= 32, max= 640, avg=235.20, stdev=268.48, samples=20 00:28:45.871 lat (msec) : 20=0.68%, 50=85.81%, 250=0.76%, 500=12.75% 00:28:45.871 cpu : usr=98.68%, sys=0.75%, ctx=35, majf=0, minf=27 00:28:45.871 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:28:45.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 issued rwts: total=2368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.871 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.871 filename2: (groupid=0, jobs=1): err= 0: pid=51858: Fri Jul 12 17:37:02 2024 00:28:45.871 read: IOPS=234, BW=939KiB/s (961kB/s)(9464KiB/10084msec) 00:28:45.871 slat (nsec): min=6230, max=79826, avg=29756.33, stdev=17312.05 00:28:45.871 clat (msec): min=22, max=497, avg=67.87, stdev=108.05 00:28:45.871 lat (msec): min=22, max=497, avg=67.90, stdev=108.05 00:28:45.871 clat percentiles (msec): 00:28:45.871 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 26], 00:28:45.871 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.871 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 338], 95.00th=[ 351], 00:28:45.871 | 99.00th=[ 368], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 498], 00:28:45.871 | 99.99th=[ 498] 00:28:45.871 bw ( KiB/s): min= 128, max= 2560, per=4.20%, avg=940.00, stdev=1074.47, samples=20 00:28:45.871 iops : min= 32, max= 640, avg=235.00, stdev=268.62, samples=20 00:28:45.871 lat (msec) : 50=85.88%, 100=0.68%, 250=0.59%, 500=12.85% 
00:28:45.871 cpu : usr=99.22%, sys=0.43%, ctx=17, majf=0, minf=33 00:28:45.871 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:28:45.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.871 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.871 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.872 filename2: (groupid=0, jobs=1): err= 0: pid=51859: Fri Jul 12 17:37:02 2024 00:28:45.872 read: IOPS=222, BW=891KiB/s (912kB/s)(8960KiB/10059msec) 00:28:45.872 slat (nsec): min=6096, max=67471, avg=30507.53, stdev=14043.83 00:28:45.872 clat (msec): min=20, max=686, avg=71.58, stdev=143.72 00:28:45.872 lat (msec): min=20, max=686, avg=71.61, stdev=143.72 00:28:45.872 clat percentiles (msec): 00:28:45.872 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 25], 00:28:45.872 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.872 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 36], 95.00th=[ 514], 00:28:45.872 | 99.00th=[ 542], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 684], 00:28:45.872 | 99.99th=[ 684] 00:28:45.872 bw ( KiB/s): min= 16, max= 2560, per=3.97%, avg=889.60, stdev=1097.46, samples=20 00:28:45.872 iops : min= 4, max= 640, avg=222.40, stdev=274.37, samples=20 00:28:45.872 lat (msec) : 50=90.00%, 100=0.71%, 500=2.23%, 750=7.05% 00:28:45.872 cpu : usr=98.03%, sys=1.06%, ctx=65, majf=0, minf=30 00:28:45.872 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:45.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.872 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.872 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.872 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.872 filename2: (groupid=0, jobs=1): err= 0: pid=51860: Fri Jul 12 
17:37:02 2024 00:28:45.872 read: IOPS=234, BW=939KiB/s (962kB/s)(9472KiB/10084msec) 00:28:45.872 slat (nsec): min=6457, max=74066, avg=24780.71, stdev=13458.33 00:28:45.872 clat (msec): min=13, max=472, avg=67.92, stdev=108.09 00:28:45.872 lat (msec): min=13, max=472, avg=67.95, stdev=108.08 00:28:45.872 clat percentiles (msec): 00:28:45.872 | 1.00th=[ 18], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 25], 00:28:45.872 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.872 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 334], 95.00th=[ 351], 00:28:45.872 | 99.00th=[ 368], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 472], 00:28:45.872 | 99.99th=[ 472] 00:28:45.872 bw ( KiB/s): min= 128, max= 2560, per=4.20%, avg=940.80, stdev=1074.19, samples=20 00:28:45.872 iops : min= 32, max= 640, avg=235.20, stdev=268.55, samples=20 00:28:45.872 lat (msec) : 20=1.01%, 50=85.47%, 250=0.68%, 500=12.84% 00:28:45.872 cpu : usr=99.12%, sys=0.54%, ctx=12, majf=0, minf=33 00:28:45.872 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:28:45.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.872 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.872 issued rwts: total=2368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.872 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.872 filename2: (groupid=0, jobs=1): err= 0: pid=51861: Fri Jul 12 17:37:02 2024 00:28:45.872 read: IOPS=222, BW=891KiB/s (912kB/s)(8960KiB/10059msec) 00:28:45.872 slat (nsec): min=6451, max=78281, avg=33114.10, stdev=16250.92 00:28:45.872 clat (msec): min=23, max=716, avg=71.54, stdev=141.91 00:28:45.872 lat (msec): min=23, max=716, avg=71.57, stdev=141.90 00:28:45.872 clat percentiles (msec): 00:28:45.872 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 25], 00:28:45.872 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.872 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 55], 95.00th=[ 
506], 00:28:45.872 | 99.00th=[ 550], 99.50th=[ 667], 99.90th=[ 709], 99.95th=[ 718], 00:28:45.872 | 99.99th=[ 718] 00:28:45.872 bw ( KiB/s): min= 128, max= 2560, per=4.18%, avg=936.42, stdev=1106.91, samples=19 00:28:45.872 iops : min= 32, max= 640, avg=234.11, stdev=276.73, samples=19 00:28:45.872 lat (msec) : 50=89.29%, 100=0.71%, 250=0.80%, 500=2.95%, 750=6.25% 00:28:45.872 cpu : usr=98.17%, sys=1.04%, ctx=315, majf=0, minf=39 00:28:45.872 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:45.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.872 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.872 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.872 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.872 filename2: (groupid=0, jobs=1): err= 0: pid=51862: Fri Jul 12 17:37:02 2024 00:28:45.872 read: IOPS=243, BW=975KiB/s (999kB/s)(9856KiB/10105msec) 00:28:45.872 slat (nsec): min=6263, max=50426, avg=12743.39, stdev=6263.13 00:28:45.872 clat (msec): min=2, max=432, avg=65.48, stdev=106.61 00:28:45.872 lat (msec): min=2, max=432, avg=65.50, stdev=106.61 00:28:45.872 clat percentiles (msec): 00:28:45.872 | 1.00th=[ 4], 5.00th=[ 21], 10.00th=[ 24], 20.00th=[ 25], 00:28:45.872 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.872 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 330], 95.00th=[ 351], 00:28:45.872 | 99.00th=[ 368], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 435], 00:28:45.872 | 99.99th=[ 435] 00:28:45.872 bw ( KiB/s): min= 128, max= 3200, per=4.37%, avg=979.20, stdev=1141.89, samples=20 00:28:45.872 iops : min= 32, max= 800, avg=244.80, stdev=285.47, samples=20 00:28:45.872 lat (msec) : 4=2.60%, 10=0.65%, 20=1.30%, 50=82.47%, 250=0.65% 00:28:45.872 lat (msec) : 500=12.34% 00:28:45.872 cpu : usr=99.16%, sys=0.49%, ctx=9, majf=0, minf=27 00:28:45.872 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 
32=0.0%, >=64=0.0% 00:28:45.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.872 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.872 issued rwts: total=2464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.872 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.872 filename2: (groupid=0, jobs=1): err= 0: pid=51863: Fri Jul 12 17:37:02 2024 00:28:45.872 read: IOPS=233, BW=935KiB/s (958kB/s)(9408KiB/10059msec) 00:28:45.872 slat (nsec): min=6426, max=67764, avg=26841.27, stdev=14554.88 00:28:45.872 clat (msec): min=23, max=382, avg=68.21, stdev=107.97 00:28:45.872 lat (msec): min=23, max=382, avg=68.23, stdev=107.96 00:28:45.872 clat percentiles (msec): 00:28:45.872 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 26], 00:28:45.872 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.872 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 338], 95.00th=[ 351], 00:28:45.872 | 99.00th=[ 368], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:28:45.872 | 99.99th=[ 384] 00:28:45.872 bw ( KiB/s): min= 128, max= 2560, per=4.17%, avg=934.40, stdev=1071.01, samples=20 00:28:45.872 iops : min= 32, max= 640, avg=233.60, stdev=267.75, samples=20 00:28:45.872 lat (msec) : 50=85.71%, 100=0.68%, 250=0.68%, 500=12.93% 00:28:45.872 cpu : usr=98.66%, sys=0.82%, ctx=42, majf=0, minf=33 00:28:45.872 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:45.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.872 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.872 issued rwts: total=2352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.872 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.872 filename2: (groupid=0, jobs=1): err= 0: pid=51864: Fri Jul 12 17:37:02 2024 00:28:45.872 read: IOPS=232, BW=930KiB/s (952kB/s)(9352KiB/10058msec) 00:28:45.872 slat (nsec): min=6316, max=67597, 
avg=28977.59, stdev=15068.70 00:28:45.872 clat (msec): min=23, max=563, avg=68.34, stdev=111.17 00:28:45.872 lat (msec): min=23, max=563, avg=68.37, stdev=111.17 00:28:45.872 clat percentiles (msec): 00:28:45.872 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 26], 00:28:45.872 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:28:45.872 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 330], 95.00th=[ 355], 00:28:45.872 | 99.00th=[ 426], 99.50th=[ 443], 99.90th=[ 567], 99.95th=[ 567], 00:28:45.872 | 99.99th=[ 567] 00:28:45.872 bw ( KiB/s): min= 128, max= 2560, per=4.16%, avg=932.80, stdev=1071.37, samples=20 00:28:45.872 iops : min= 32, max= 640, avg=233.20, stdev=267.84, samples=20 00:28:45.872 lat (msec) : 50=86.23%, 100=0.68%, 250=0.26%, 500=12.57%, 750=0.26% 00:28:45.872 cpu : usr=98.94%, sys=0.72%, ctx=15, majf=0, minf=37 00:28:45.872 IO depths : 1=5.5%, 2=11.2%, 4=23.4%, 8=52.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:28:45.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.872 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:45.872 issued rwts: total=2338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:45.872 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:45.872 00:28:45.872 Run status group 0 (all jobs): 00:28:45.872 READ: bw=21.9MiB/s (22.9MB/s), 891KiB/s-1006KiB/s (912kB/s-1030kB/s), io=221MiB (232MB), run=10057-10112msec 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:45.872 
17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:45.872 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # 
local sub_id=0 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.873 bdev_null0 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.873 [2024-07-12 17:37:03.081940] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.873 bdev_null1 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.873 17:37:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.873 { 00:28:45.873 "params": { 00:28:45.873 "name": "Nvme$subsystem", 00:28:45.873 "trtype": "$TEST_TRANSPORT", 00:28:45.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.873 "adrfam": "ipv4", 00:28:45.873 "trsvcid": "$NVMF_PORT", 00:28:45.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.873 "hdgst": ${hdgst:-false}, 00:28:45.873 "ddgst": ${ddgst:-false} 00:28:45.873 }, 00:28:45.873 "method": "bdev_nvme_attach_controller" 00:28:45.873 } 00:28:45.873 EOF 00:28:45.873 )") 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.873 { 00:28:45.873 "params": { 00:28:45.873 "name": "Nvme$subsystem", 00:28:45.873 "trtype": "$TEST_TRANSPORT", 00:28:45.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.873 "adrfam": "ipv4", 00:28:45.873 "trsvcid": "$NVMF_PORT", 00:28:45.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.873 "hdgst": ${hdgst:-false}, 00:28:45.873 "ddgst": ${ddgst:-false} 00:28:45.873 }, 00:28:45.873 "method": "bdev_nvme_attach_controller" 00:28:45.873 } 00:28:45.873 EOF 00:28:45.873 )") 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:45.873 "params": { 00:28:45.873 "name": "Nvme0", 00:28:45.873 "trtype": "tcp", 00:28:45.873 "traddr": "10.0.0.2", 00:28:45.873 "adrfam": "ipv4", 00:28:45.873 "trsvcid": "4420", 00:28:45.873 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:45.873 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:45.873 "hdgst": false, 00:28:45.873 "ddgst": false 00:28:45.873 }, 00:28:45.873 "method": "bdev_nvme_attach_controller" 00:28:45.873 },{ 00:28:45.873 "params": { 00:28:45.873 "name": "Nvme1", 00:28:45.873 "trtype": "tcp", 00:28:45.873 "traddr": "10.0.0.2", 00:28:45.873 "adrfam": "ipv4", 00:28:45.873 "trsvcid": "4420", 00:28:45.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:45.873 "hdgst": false, 00:28:45.873 "ddgst": false 00:28:45.873 }, 00:28:45.873 "method": "bdev_nvme_attach_controller" 00:28:45.873 }' 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 
-- # [[ -n '' ]] 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:45.873 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:45.874 17:37:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:45.874 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:45.874 ... 00:28:45.874 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:45.874 ... 
00:28:45.874 fio-3.35 00:28:45.874 Starting 4 threads 00:28:45.874 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.140 00:28:51.140 filename0: (groupid=0, jobs=1): err= 0: pid=53817: Fri Jul 12 17:37:09 2024 00:28:51.140 read: IOPS=2686, BW=21.0MiB/s (22.0MB/s)(105MiB/5002msec) 00:28:51.140 slat (nsec): min=2937, max=59305, avg=10831.20, stdev=5348.40 00:28:51.140 clat (usec): min=796, max=11317, avg=2946.47, stdev=554.66 00:28:51.140 lat (usec): min=808, max=11328, avg=2957.30, stdev=554.60 00:28:51.140 clat percentiles (usec): 00:28:51.140 | 1.00th=[ 1860], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2606], 00:28:51.140 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 2999], 00:28:51.140 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3490], 95.00th=[ 4080], 00:28:51.140 | 99.00th=[ 4817], 99.50th=[ 4883], 99.90th=[ 5604], 99.95th=[11207], 00:28:51.140 | 99.99th=[11207] 00:28:51.140 bw ( KiB/s): min=20736, max=22336, per=25.65%, avg=21459.56, stdev=509.10, samples=9 00:28:51.140 iops : min= 2592, max= 2792, avg=2682.44, stdev=63.64, samples=9 00:28:51.140 lat (usec) : 1000=0.01% 00:28:51.140 lat (msec) : 2=1.78%, 4=92.73%, 10=5.42%, 20=0.06% 00:28:51.140 cpu : usr=96.48%, sys=3.12%, ctx=42, majf=0, minf=9 00:28:51.140 IO depths : 1=0.3%, 2=3.8%, 4=67.3%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:51.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.140 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.140 issued rwts: total=13438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.140 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:51.140 filename0: (groupid=0, jobs=1): err= 0: pid=53818: Fri Jul 12 17:37:09 2024 00:28:51.140 read: IOPS=2640, BW=20.6MiB/s (21.6MB/s)(103MiB/5001msec) 00:28:51.140 slat (nsec): min=4251, max=59320, avg=10529.72, stdev=6011.72 00:28:51.140 clat (usec): min=1060, max=9604, avg=2997.65, stdev=499.04 00:28:51.140 lat (usec): min=1071, 
max=9619, avg=3008.18, stdev=498.84 00:28:51.140 clat percentiles (usec): 00:28:51.140 | 1.00th=[ 2024], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2704], 00:28:51.140 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 3032], 00:28:51.140 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3556], 95.00th=[ 3982], 00:28:51.140 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 9372], 00:28:51.140 | 99.99th=[ 9634] 00:28:51.140 bw ( KiB/s): min=20752, max=21584, per=25.25%, avg=21125.33, stdev=292.74, samples=9 00:28:51.140 iops : min= 2594, max= 2698, avg=2640.67, stdev=36.59, samples=9 00:28:51.140 lat (msec) : 2=0.85%, 4=94.33%, 10=4.82% 00:28:51.140 cpu : usr=96.02%, sys=3.62%, ctx=7, majf=0, minf=9 00:28:51.140 IO depths : 1=0.2%, 2=3.9%, 4=67.9%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:51.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.140 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.140 issued rwts: total=13207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.140 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:51.140 filename1: (groupid=0, jobs=1): err= 0: pid=53819: Fri Jul 12 17:37:09 2024 00:28:51.140 read: IOPS=2588, BW=20.2MiB/s (21.2MB/s)(101MiB/5002msec) 00:28:51.140 slat (nsec): min=6126, max=59330, avg=10537.53, stdev=5921.00 00:28:51.140 clat (usec): min=872, max=6939, avg=3058.10, stdev=513.42 00:28:51.140 lat (usec): min=884, max=6965, avg=3068.64, stdev=512.91 00:28:51.140 clat percentiles (usec): 00:28:51.140 | 1.00th=[ 2057], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2737], 00:28:51.140 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3032], 00:28:51.140 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3687], 95.00th=[ 4228], 00:28:51.140 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5276], 99.95th=[ 6587], 00:28:51.140 | 99.99th=[ 6849] 00:28:51.140 bw ( KiB/s): min=19840, max=21776, per=24.84%, avg=20782.22, stdev=591.86, 
samples=9 00:28:51.140 iops : min= 2480, max= 2722, avg=2597.78, stdev=73.98, samples=9 00:28:51.140 lat (usec) : 1000=0.01% 00:28:51.140 lat (msec) : 2=0.74%, 4=92.19%, 10=7.06% 00:28:51.140 cpu : usr=96.56%, sys=3.10%, ctx=12, majf=0, minf=0 00:28:51.140 IO depths : 1=0.2%, 2=4.2%, 4=68.1%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:51.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.140 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.140 issued rwts: total=12950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.140 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:51.140 filename1: (groupid=0, jobs=1): err= 0: pid=53820: Fri Jul 12 17:37:09 2024 00:28:51.140 read: IOPS=2540, BW=19.9MiB/s (20.8MB/s)(99.3MiB/5001msec) 00:28:51.140 slat (nsec): min=6105, max=56011, avg=9639.55, stdev=5204.89 00:28:51.140 clat (usec): min=1039, max=6230, avg=3119.78, stdev=525.96 00:28:51.140 lat (usec): min=1046, max=6256, avg=3129.42, stdev=525.71 00:28:51.140 clat percentiles (usec): 00:28:51.140 | 1.00th=[ 2073], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2802], 00:28:51.140 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3064], 00:28:51.140 | 70.00th=[ 3163], 80.00th=[ 3326], 90.00th=[ 3785], 95.00th=[ 4293], 00:28:51.140 | 99.00th=[ 4948], 99.50th=[ 5080], 99.90th=[ 5604], 99.95th=[ 5669], 00:28:51.140 | 99.99th=[ 5800] 00:28:51.140 bw ( KiB/s): min=19296, max=20864, per=24.28%, avg=20309.33, stdev=547.46, samples=9 00:28:51.140 iops : min= 2412, max= 2608, avg=2538.67, stdev=68.43, samples=9 00:28:51.140 lat (msec) : 2=0.64%, 4=91.74%, 10=7.63% 00:28:51.140 cpu : usr=96.14%, sys=3.48%, ctx=8, majf=0, minf=0 00:28:51.140 IO depths : 1=0.2%, 2=2.6%, 4=69.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:51.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.140 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:51.140 issued 
rwts: total=12707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:51.140 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:51.140 00:28:51.140 Run status group 0 (all jobs): 00:28:51.140 READ: bw=81.7MiB/s (85.7MB/s), 19.9MiB/s-21.0MiB/s (20.8MB/s-22.0MB/s), io=409MiB (428MB), run=5001-5002msec 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.140 00:28:51.140 real 0m23.974s 00:28:51.140 user 4m53.774s 00:28:51.140 sys 0m3.884s 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:51.140 17:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:51.140 ************************************ 00:28:51.141 END TEST fio_dif_rand_params 00:28:51.141 ************************************ 00:28:51.141 17:37:09 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:51.141 17:37:09 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:51.141 17:37:09 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:51.141 17:37:09 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:51.141 17:37:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:51.141 ************************************ 00:28:51.141 START TEST fio_dif_digest 00:28:51.141 ************************************ 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- 
target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:51.141 bdev_null0 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:51.141 
17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:51.141 [2024-07-12 17:37:09.333774] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:51.141 { 00:28:51.141 "params": { 00:28:51.141 "name": "Nvme$subsystem", 00:28:51.141 "trtype": "$TEST_TRANSPORT", 00:28:51.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.141 "adrfam": "ipv4", 00:28:51.141 "trsvcid": "$NVMF_PORT", 00:28:51.141 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.141 "hdgst": ${hdgst:-false}, 00:28:51.141 "ddgst": ${ddgst:-false} 00:28:51.141 }, 00:28:51.141 "method": "bdev_nvme_attach_controller" 00:28:51.141 } 00:28:51.141 EOF 00:28:51.141 )") 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:51.141 "params": { 00:28:51.141 "name": "Nvme0", 00:28:51.141 "trtype": "tcp", 00:28:51.141 "traddr": "10.0.0.2", 00:28:51.141 "adrfam": "ipv4", 00:28:51.141 "trsvcid": "4420", 00:28:51.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:51.141 "hdgst": true, 00:28:51.141 "ddgst": true 00:28:51.141 }, 00:28:51.141 "method": "bdev_nvme_attach_controller" 00:28:51.141 }' 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:51.141 17:37:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:51.141 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:51.141 ... 00:28:51.141 fio-3.35 00:28:51.141 Starting 3 threads 00:28:51.141 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.348 00:29:03.348 filename0: (groupid=0, jobs=1): err= 0: pid=54879: Fri Jul 12 17:37:20 2024 00:29:03.348 read: IOPS=283, BW=35.4MiB/s (37.2MB/s)(356MiB/10046msec) 00:29:03.348 slat (nsec): min=2983, max=28990, avg=11453.80, stdev=2141.53 00:29:03.348 clat (usec): min=7063, max=50454, avg=10549.07, stdev=1289.11 00:29:03.348 lat (usec): min=7075, max=50467, avg=10560.52, stdev=1289.05 00:29:03.348 clat percentiles (usec): 00:29:03.348 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:29:03.348 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:29:03.348 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:29:03.348 | 99.00th=[12387], 99.50th=[12649], 99.90th=[19792], 99.95th=[47449], 00:29:03.348 | 99.99th=[50594] 00:29:03.348 bw ( KiB/s): min=35001, max=37632, per=34.71%, avg=36438.05, stdev=555.90, samples=20 00:29:03.348 iops : min= 273, max= 294, avg=284.65, stdev= 4.40, samples=20 00:29:03.348 lat (msec) : 10=22.64%, 20=77.29%, 50=0.04%, 100=0.04% 00:29:03.348 cpu : usr=94.55%, sys=5.14%, ctx=32, majf=0, minf=178 00:29:03.348 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:03.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:29:03.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:03.348 issued rwts: total=2849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:03.348 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:03.348 filename0: (groupid=0, jobs=1): err= 0: pid=54880: Fri Jul 12 17:37:20 2024 00:29:03.348 read: IOPS=265, BW=33.1MiB/s (34.8MB/s)(333MiB/10044msec) 00:29:03.348 slat (nsec): min=6504, max=28913, avg=11524.11, stdev=2275.27 00:29:03.348 clat (usec): min=7569, max=47789, avg=11286.04, stdev=1272.01 00:29:03.348 lat (usec): min=7583, max=47801, avg=11297.57, stdev=1272.04 00:29:03.348 clat percentiles (usec): 00:29:03.348 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10552], 00:29:03.348 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:29:03.348 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12256], 95.00th=[12649], 00:29:03.348 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14746], 99.95th=[45876], 00:29:03.348 | 99.99th=[47973] 00:29:03.348 bw ( KiB/s): min=32768, max=35328, per=32.45%, avg=34064.15, stdev=622.83, samples=20 00:29:03.348 iops : min= 256, max= 276, avg=266.10, stdev= 4.88, samples=20 00:29:03.348 lat (msec) : 10=5.52%, 20=94.40%, 50=0.08% 00:29:03.348 cpu : usr=94.42%, sys=5.27%, ctx=27, majf=0, minf=101 00:29:03.348 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:03.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:03.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:03.349 issued rwts: total=2663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:03.349 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:03.349 filename0: (groupid=0, jobs=1): err= 0: pid=54881: Fri Jul 12 17:37:20 2024 00:29:03.349 read: IOPS=271, BW=33.9MiB/s (35.6MB/s)(341MiB/10044msec) 00:29:03.349 slat (nsec): min=6499, max=37172, avg=11620.35, stdev=2241.50 00:29:03.349 clat (usec): 
min=6787, max=47314, avg=11024.64, stdev=1258.18 00:29:03.349 lat (usec): min=6802, max=47325, avg=11036.26, stdev=1258.19 00:29:03.349 clat percentiles (usec): 00:29:03.349 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10290], 00:29:03.349 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:29:03.349 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:29:03.349 | 99.00th=[13042], 99.50th=[13173], 99.90th=[14746], 99.95th=[46400], 00:29:03.349 | 99.99th=[47449] 00:29:03.349 bw ( KiB/s): min=33792, max=35584, per=33.22%, avg=34867.20, stdev=516.03, samples=20 00:29:03.349 iops : min= 264, max= 278, avg=272.40, stdev= 4.03, samples=20 00:29:03.349 lat (msec) : 10=9.72%, 20=90.21%, 50=0.07% 00:29:03.349 cpu : usr=94.34%, sys=5.35%, ctx=29, majf=0, minf=118 00:29:03.349 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:03.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:03.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:03.349 issued rwts: total=2726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:03.349 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:03.349 00:29:03.349 Run status group 0 (all jobs): 00:29:03.349 READ: bw=103MiB/s (107MB/s), 33.1MiB/s-35.4MiB/s (34.8MB/s-37.2MB/s), io=1030MiB (1080MB), run=10044-10046msec 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:03.349 17:37:20 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.349 00:29:03.349 real 0m11.048s 00:29:03.349 user 0m34.987s 00:29:03.349 sys 0m1.849s 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:03.349 17:37:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:03.349 ************************************ 00:29:03.349 END TEST fio_dif_digest 00:29:03.349 ************************************ 00:29:03.349 17:37:20 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:03.349 17:37:20 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:03.349 17:37:20 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:03.349 17:37:20 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:03.349 17:37:20 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:03.349 17:37:20 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:03.349 17:37:20 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:03.349 17:37:20 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:03.349 17:37:20 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:03.349 rmmod nvme_tcp 00:29:03.349 rmmod nvme_fabrics 00:29:03.349 rmmod nvme_keyring 00:29:03.349 17:37:20 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:03.349 17:37:20 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:03.349 
17:37:20 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:03.349 17:37:20 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 46485 ']' 00:29:03.349 17:37:20 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 46485 00:29:03.349 17:37:20 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 46485 ']' 00:29:03.349 17:37:20 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 46485 00:29:03.349 17:37:20 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:29:03.349 17:37:20 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:03.349 17:37:20 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 46485 00:29:03.349 17:37:20 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:03.349 17:37:20 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:03.349 17:37:20 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46485' 00:29:03.349 killing process with pid 46485 00:29:03.349 17:37:20 nvmf_dif -- common/autotest_common.sh@967 -- # kill 46485 00:29:03.349 17:37:20 nvmf_dif -- common/autotest_common.sh@972 -- # wait 46485 00:29:03.349 17:37:20 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:03.349 17:37:20 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:04.286 Waiting for block devices as requested 00:29:04.545 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:04.545 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:04.545 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:04.803 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:04.803 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:04.803 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:04.803 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:05.063 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:05.063 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:05.063 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:05.322 
0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:05.322 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:05.322 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:05.322 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:05.581 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:05.581 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:05.581 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:05.840 17:37:24 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:05.840 17:37:24 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:05.840 17:37:24 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:05.840 17:37:24 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:05.841 17:37:24 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.841 17:37:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:05.841 17:37:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.748 17:37:26 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:07.748 00:29:07.748 real 1m12.591s 00:29:07.748 user 7m10.364s 00:29:07.748 sys 0m17.612s 00:29:07.748 17:37:26 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:07.748 17:37:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:07.748 ************************************ 00:29:07.748 END TEST nvmf_dif 00:29:07.748 ************************************ 00:29:07.748 17:37:26 -- common/autotest_common.sh@1142 -- # return 0 00:29:07.748 17:37:26 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:07.748 17:37:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:07.748 17:37:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.748 17:37:26 -- common/autotest_common.sh@10 -- # set +x 00:29:07.748 ************************************ 
00:29:07.748 START TEST nvmf_abort_qd_sizes 00:29:07.748 ************************************ 00:29:07.748 17:37:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:08.007 * Looking for test storage... 00:29:08.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:08.007 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:08.008 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:08.008 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.008 17:37:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:08.008 17:37:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.008 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:08.008 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:08.008 17:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:08.008 17:37:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:13.283 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.283 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:13.283 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:13.283 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:13.284 17:37:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:13.284 
17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:13.284 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:13.284 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.284 
17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:13.284 Found net devices under 0000:86:00.0: cvl_0_0 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:13.284 Found net devices under 0000:86:00.1: cvl_0_1 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- 
nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.284 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
lo up 00:29:13.285 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.285 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:13.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:29:13.285 00:29:13.285 --- 10.0.0.2 ping statistics --- 00:29:13.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.285 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:29:13.285 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:13.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:29:13.285 00:29:13.285 --- 10.0.0.1 ping statistics --- 00:29:13.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.285 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:29:13.285 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.285 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:13.285 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:13.285 17:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:15.827 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:15.827 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:15.827 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:15.827 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:16.086 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:16.086 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:16.086 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:16.086 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:16.086 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:16.086 0000:80:04.6 (8086 2021): ioatdma -> 
vfio-pci 00:29:16.086 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:16.086 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:16.086 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:16.086 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:16.086 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:16.086 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:17.024 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=62662 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 62662 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 62662 ']' 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:17.024 17:37:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:17.024 [2024-07-12 17:37:35.752526] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:29:17.024 [2024-07-12 17:37:35.752569] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.024 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.338 [2024-07-12 17:37:35.808728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.338 [2024-07-12 17:37:35.890578] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.338 [2024-07-12 17:37:35.890616] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.338 [2024-07-12 17:37:35.890623] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.338 [2024-07-12 17:37:35.890629] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.338 [2024-07-12 17:37:35.890634] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:17.338 [2024-07-12 17:37:35.890688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.338 [2024-07-12 17:37:35.890708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.338 [2024-07-12 17:37:35.890818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.338 [2024-07-12 17:37:35.890820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:17.907 17:37:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:17.907 ************************************ 00:29:17.907 START TEST spdk_target_abort 00:29:17.907 ************************************ 00:29:17.907 17:37:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:29:17.907 17:37:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:17.907 17:37:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:29:17.907 17:37:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.907 17:37:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.190 spdk_targetn1 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.190 [2024-07-12 17:37:39.476188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.190 [2024-07-12 17:37:39.509099] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:21.190 17:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:21.190 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.473 Initializing NVMe Controllers 00:29:24.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:24.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:24.473 Initialization complete. Launching workers. 
00:29:24.473 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15891, failed: 0 00:29:24.473 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1403, failed to submit 14488 00:29:24.473 success 744, unsuccess 659, failed 0 00:29:24.473 17:37:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:24.473 17:37:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:24.473 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.755 Initializing NVMe Controllers 00:29:27.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:27.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:27.755 Initialization complete. Launching workers. 
00:29:27.755 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8592, failed: 0 00:29:27.755 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1269, failed to submit 7323 00:29:27.755 success 309, unsuccess 960, failed 0 00:29:27.755 17:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:27.755 17:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:27.755 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.043 Initializing NVMe Controllers 00:29:31.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:31.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:31.043 Initialization complete. Launching workers. 
00:29:31.043 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37884, failed: 0 00:29:31.043 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2839, failed to submit 35045 00:29:31.043 success 596, unsuccess 2243, failed 0 00:29:31.043 17:37:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:31.043 17:37:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.043 17:37:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:31.043 17:37:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.043 17:37:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:31.043 17:37:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.043 17:37:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:31.981 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.981 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 62662 00:29:31.981 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 62662 ']' 00:29:31.981 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 62662 00:29:31.981 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:29:31.981 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:31.981 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62662 00:29:31.981 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:31.981 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:31.981 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62662' 00:29:31.981 killing process with pid 62662 00:29:31.981 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 62662 00:29:31.981 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 62662 00:29:32.241 00:29:32.241 real 0m14.162s 00:29:32.241 user 0m56.541s 00:29:32.241 sys 0m2.226s 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.241 ************************************ 00:29:32.241 END TEST spdk_target_abort 00:29:32.241 ************************************ 00:29:32.241 17:37:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:32.241 17:37:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:32.241 17:37:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:32.241 17:37:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.241 17:37:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:32.241 ************************************ 00:29:32.241 START TEST kernel_target_abort 00:29:32.241 ************************************ 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 
00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:32.241 17:37:50 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:32.241 17:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:34.778 Waiting for block devices as requested 00:29:34.778 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:34.778 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:34.778 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:35.037 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:35.037 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:35.037 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:35.037 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:35.296 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:35.296 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:35.296 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:35.556 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:35.556 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:35.556 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:35.556 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:35.815 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:35.815 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:35.815 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:36.075 No valid GPT data, bailing 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:36.075 17:37:54 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:36.075 00:29:36.075 Discovery Log Number of Records 2, Generation counter 2 00:29:36.075 =====Discovery Log Entry 0====== 00:29:36.075 trtype: tcp 00:29:36.075 adrfam: ipv4 00:29:36.075 subtype: current discovery subsystem 00:29:36.075 treq: not specified, sq flow control disable supported 00:29:36.075 portid: 1 00:29:36.075 trsvcid: 4420 00:29:36.075 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:36.075 traddr: 10.0.0.1 00:29:36.075 eflags: none 00:29:36.075 sectype: none 00:29:36.075 =====Discovery Log Entry 1====== 00:29:36.075 trtype: tcp 00:29:36.075 adrfam: ipv4 00:29:36.075 subtype: nvme subsystem 00:29:36.075 treq: not specified, sq flow control disable supported 00:29:36.075 portid: 1 00:29:36.075 trsvcid: 4420 00:29:36.075 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:36.075 traddr: 10.0.0.1 00:29:36.075 eflags: none 00:29:36.075 
sectype: none 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:36.075 17:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:36.075 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.362 Initializing NVMe Controllers 00:29:39.362 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:39.362 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:39.362 Initialization complete. Launching workers. 
00:29:39.362 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 86243, failed: 0 00:29:39.362 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 86243, failed to submit 0 00:29:39.362 success 0, unsuccess 86243, failed 0 00:29:39.362 17:37:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:39.362 17:37:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:39.362 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.653 Initializing NVMe Controllers 00:29:42.653 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:42.653 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:42.653 Initialization complete. Launching workers. 
00:29:42.653 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 139975, failed: 0 00:29:42.653 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34786, failed to submit 105189 00:29:42.653 success 0, unsuccess 34786, failed 0 00:29:42.653 17:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:42.653 17:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:42.653 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.001 Initializing NVMe Controllers 00:29:46.002 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:46.002 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:46.002 Initialization complete. Launching workers. 
00:29:46.002 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 133783, failed: 0 00:29:46.002 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33490, failed to submit 100293 00:29:46.002 success 0, unsuccess 33490, failed 0 00:29:46.002 17:38:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:46.002 17:38:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:46.002 17:38:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:46.002 17:38:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:46.002 17:38:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:46.002 17:38:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:46.002 17:38:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:46.002 17:38:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:46.002 17:38:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:46.002 17:38:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:47.905 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:47.905 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:47.905 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:47.905 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:47.905 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:47.905 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:47.905 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:47.905 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:47.905 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:47.905 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:47.905 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:47.905 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:47.905 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:48.163 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:48.163 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:48.163 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:49.101 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:49.101 00:29:49.101 real 0m16.799s 00:29:49.101 user 0m8.419s 00:29:49.101 sys 0m4.704s 00:29:49.101 17:38:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:49.101 17:38:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:49.101 ************************************ 00:29:49.101 END TEST kernel_target_abort 00:29:49.101 ************************************ 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:49.101 rmmod nvme_tcp 00:29:49.101 rmmod nvme_fabrics 
00:29:49.101 rmmod nvme_keyring 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 62662 ']' 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 62662 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 62662 ']' 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 62662 00:29:49.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (62662) - No such process 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 62662 is not found' 00:29:49.101 Process with pid 62662 is not found 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:49.101 17:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:51.638 Waiting for block devices as requested 00:29:51.638 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:51.638 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:51.638 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:51.638 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:51.638 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:51.638 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:51.638 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:51.897 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:51.897 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:51.897 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:52.155 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:52.155 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:52.155 0000:80:04.4 (8086 2021): vfio-pci -> 
ioatdma 00:29:52.155 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:52.413 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:52.413 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:52.413 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:52.671 17:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:52.671 17:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:52.671 17:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:52.671 17:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:52.671 17:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.671 17:38:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:52.671 17:38:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.573 17:38:13 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:54.573 00:29:54.573 real 0m46.756s 00:29:54.573 user 1m8.799s 00:29:54.573 sys 0m14.753s 00:29:54.573 17:38:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:54.573 17:38:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:54.573 ************************************ 00:29:54.573 END TEST nvmf_abort_qd_sizes 00:29:54.573 ************************************ 00:29:54.573 17:38:13 -- common/autotest_common.sh@1142 -- # return 0 00:29:54.573 17:38:13 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:54.573 17:38:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:54.573 17:38:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:54.573 17:38:13 -- common/autotest_common.sh@10 -- # set +x 00:29:54.573 ************************************ 00:29:54.573 START TEST keyring_file 00:29:54.573 
************************************ 00:29:54.573 17:38:13 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:54.832 * Looking for test storage... 00:29:54.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:54.832 17:38:13 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:54.832 17:38:13 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.832 17:38:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:54.832 17:38:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.832 17:38:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.832 17:38:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.832 17:38:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.832 17:38:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.832 17:38:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.832 17:38:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.832 17:38:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.832 17:38:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.832 17:38:13 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.833 
17:38:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.833 17:38:13 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.833 17:38:13 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.833 17:38:13 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.833 17:38:13 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.833 17:38:13 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.833 17:38:13 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.833 17:38:13 
keyring_file -- paths/export.sh@5 -- # export PATH 00:29:54.833 17:38:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:54.833 17:38:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:54.833 17:38:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:54.833 17:38:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:54.833 17:38:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:54.833 17:38:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:54.833 17:38:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HxuWACeFNr 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HxuWACeFNr 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HxuWACeFNr 00:29:54.833 17:38:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.HxuWACeFNr 00:29:54.833 17:38:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.77icrxVwPo 00:29:54.833 17:38:13 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:54.833 17:38:13 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.77icrxVwPo 00:29:54.833 17:38:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.77icrxVwPo 00:29:54.833 17:38:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.77icrxVwPo 00:29:54.833 17:38:13 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:54.833 17:38:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=71392 00:29:54.833 17:38:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 71392 00:29:54.833 17:38:13 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 71392 ']' 00:29:54.833 17:38:13 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.833 17:38:13 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:54.833 17:38:13 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
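The `format_interchange_psk` calls traced above build the TLS PSK interchange strings (the `/tmp/tmp.HxuWACeFNr` and `/tmp/tmp.77icrxVwPo` key files) via an inline `python -` snippet. A minimal self-contained sketch of that formatting, under the assumption (not confirmed by this log) that the key string's ASCII bytes are suffixed with a little-endian CRC-32 and base64-wrapped under the `NVMeTLSkey-1` prefix:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    # Assumed layout: "<prefix>:<2-hex-digit digest>:<base64(key bytes + CRC-32 LE)>:"
    crc = zlib.crc32(key.encode()).to_bytes(4, "little")  # integrity tag appended to the key
    b64 = base64.b64encode(key.encode() + crc).decode()
    return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)
```

Decoding the base64 payload and re-checking the trailing CRC-32 recovers and validates the original key, which is how a consumer can verify an interchange string before use.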
00:29:54.833 17:38:13 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:54.833 17:38:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:54.833 [2024-07-12 17:38:13.597034] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:29:54.833 [2024-07-12 17:38:13.597087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71392 ] 00:29:55.092 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.092 [2024-07-12 17:38:13.647923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.092 [2024-07-12 17:38:13.726235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.662 17:38:14 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:55.662 17:38:14 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:55.662 17:38:14 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:55.662 17:38:14 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.662 17:38:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:55.662 [2024-07-12 17:38:14.413839] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.662 null0 00:29:55.921 [2024-07-12 17:38:14.445883] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:55.921 [2024-07-12 17:38:14.446153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:55.921 [2024-07-12 17:38:14.453899] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:55.921 17:38:14 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:55.921 [2024-07-12 17:38:14.465931] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:55.921 request: 00:29:55.921 { 00:29:55.921 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:55.921 "secure_channel": false, 00:29:55.921 "listen_address": { 00:29:55.921 "trtype": "tcp", 00:29:55.921 "traddr": "127.0.0.1", 00:29:55.921 "trsvcid": "4420" 00:29:55.921 }, 00:29:55.921 "method": "nvmf_subsystem_add_listener", 00:29:55.921 "req_id": 1 00:29:55.921 } 00:29:55.921 Got JSON-RPC error response 00:29:55.921 response: 00:29:55.921 { 00:29:55.921 "code": -32602, 00:29:55.921 "message": "Invalid parameters" 00:29:55.921 } 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:55.921 17:38:14 keyring_file -- keyring/file.sh@46 -- # bperfpid=71434 00:29:55.921 17:38:14 keyring_file -- keyring/file.sh@48 -- # waitforlisten 71434 /var/tmp/bperf.sock 00:29:55.921 17:38:14 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 71434 ']' 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:55.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:55.921 17:38:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:55.921 [2024-07-12 17:38:14.518748] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:29:55.921 [2024-07-12 17:38:14.518787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71434 ] 00:29:55.921 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.921 [2024-07-12 17:38:14.571881] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.921 [2024-07-12 17:38:14.651597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.858 17:38:15 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:56.858 17:38:15 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:56.858 17:38:15 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HxuWACeFNr 00:29:56.858 17:38:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HxuWACeFNr 00:29:56.858 17:38:15 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.77icrxVwPo 00:29:56.858 17:38:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.77icrxVwPo 00:29:57.116 17:38:15 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:57.116 17:38:15 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:57.116 17:38:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:57.117 17:38:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:57.117 17:38:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:57.117 17:38:15 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.HxuWACeFNr == \/\t\m\p\/\t\m\p\.\H\x\u\W\A\C\e\F\N\r 
]] 00:29:57.117 17:38:15 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:29:57.117 17:38:15 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:57.117 17:38:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:57.117 17:38:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:57.117 17:38:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:57.374 17:38:16 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.77icrxVwPo == \/\t\m\p\/\t\m\p\.\7\7\i\c\r\x\V\w\P\o ]] 00:29:57.374 17:38:16 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:57.374 17:38:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:57.374 17:38:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:57.374 17:38:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:57.374 17:38:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:57.374 17:38:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:57.633 17:38:16 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:57.633 17:38:16 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:57.633 17:38:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:57.633 17:38:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:57.633 17:38:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:57.633 17:38:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:57.633 17:38:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:57.633 17:38:16 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:57.633 17:38:16 
keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:57.633 17:38:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:57.891 [2024-07-12 17:38:16.534056] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:57.891 nvme0n1 00:29:57.891 17:38:16 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:57.891 17:38:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:57.891 17:38:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:57.891 17:38:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:57.891 17:38:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:57.891 17:38:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:58.150 17:38:16 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:58.150 17:38:16 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:58.150 17:38:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:58.150 17:38:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:58.150 17:38:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:58.150 17:38:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:58.150 17:38:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:58.409 17:38:16 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:58.409 
17:38:16 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:58.409 Running I/O for 1 seconds...
00:29:59.344
00:29:59.344 Latency(us)
00:29:59.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:59.344 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:29:59.344 nvme0n1 : 1.00 16201.92 63.29 0.00 0.00 7879.61 4131.62 13221.18
00:29:59.344 ===================================================================================================================
00:29:59.344 Total : 16201.92 63.29 0.00 0.00 7879.61 4131.62 13221.18
00:29:59.344 0
00:29:59.344 17:38:18 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:29:59.344 17:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:29:59.603 17:38:18 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:29:59.603 17:38:18 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:29:59.603 17:38:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:29:59.603 17:38:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:29:59.603 17:38:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:29:59.603 17:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:29:59.862 17:38:18 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:29:59.862 17:38:18 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:29:59.862 17:38:18 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:29:59.862 17:38:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:29:59.862 17:38:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd
keyring_get_keys 00:29:59.862 17:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:59.862 17:38:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:59.862 17:38:18 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:59.862 17:38:18 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:59.862 17:38:18 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:59.862 17:38:18 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:59.862 17:38:18 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:59.862 17:38:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:59.862 17:38:18 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:59.862 17:38:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:59.862 17:38:18 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:59.862 17:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:00.120 [2024-07-12 17:38:18.769384] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 
107: Transport endpoint is not connected 00:30:00.120 [2024-07-12 17:38:18.769927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee770 (107): Transport endpoint is not connected 00:30:00.120 [2024-07-12 17:38:18.770922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfee770 (9): Bad file descriptor 00:30:00.120 [2024-07-12 17:38:18.771924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:00.120 [2024-07-12 17:38:18.771933] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:00.120 [2024-07-12 17:38:18.771939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:00.120 request: 00:30:00.120 { 00:30:00.120 "name": "nvme0", 00:30:00.120 "trtype": "tcp", 00:30:00.120 "traddr": "127.0.0.1", 00:30:00.120 "adrfam": "ipv4", 00:30:00.120 "trsvcid": "4420", 00:30:00.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:00.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:00.121 "prchk_reftag": false, 00:30:00.121 "prchk_guard": false, 00:30:00.121 "hdgst": false, 00:30:00.121 "ddgst": false, 00:30:00.121 "psk": "key1", 00:30:00.121 "method": "bdev_nvme_attach_controller", 00:30:00.121 "req_id": 1 00:30:00.121 } 00:30:00.121 Got JSON-RPC error response 00:30:00.121 response: 00:30:00.121 { 00:30:00.121 "code": -5, 00:30:00.121 "message": "Input/output error" 00:30:00.121 } 00:30:00.121 17:38:18 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:00.121 17:38:18 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:00.121 17:38:18 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:00.121 17:38:18 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:00.121 17:38:18 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:00.121 17:38:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 
00:30:00.121 17:38:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:00.121 17:38:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:00.121 17:38:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:00.121 17:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:00.380 17:38:18 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:00.380 17:38:18 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:00.380 17:38:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:00.380 17:38:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:00.380 17:38:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:00.380 17:38:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:00.380 17:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:00.380 17:38:19 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:00.380 17:38:19 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:00.380 17:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:00.638 17:38:19 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:00.638 17:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:00.896 17:38:19 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:00.896 17:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:30:00.896 17:38:19 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:00.896 17:38:19 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:00.897 17:38:19 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.HxuWACeFNr 00:30:00.897 17:38:19 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.HxuWACeFNr 00:30:00.897 17:38:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:00.897 17:38:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.HxuWACeFNr 00:30:00.897 17:38:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:00.897 17:38:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:00.897 17:38:19 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:00.897 17:38:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:00.897 17:38:19 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HxuWACeFNr 00:30:00.897 17:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HxuWACeFNr 00:30:01.155 [2024-07-12 17:38:19.810993] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.HxuWACeFNr': 0100660 00:30:01.155 [2024-07-12 17:38:19.811019] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:01.155 request: 00:30:01.155 { 00:30:01.155 "name": "key0", 00:30:01.155 "path": "/tmp/tmp.HxuWACeFNr", 00:30:01.155 "method": "keyring_file_add_key", 00:30:01.155 "req_id": 1 00:30:01.155 } 00:30:01.155 Got JSON-RPC error response 00:30:01.155 response: 00:30:01.155 { 00:30:01.155 "code": -1, 00:30:01.155 "message": "Operation not permitted" 00:30:01.155 } 00:30:01.155 17:38:19 keyring_file -- 
common/autotest_common.sh@651 -- # es=1 00:30:01.155 17:38:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:01.155 17:38:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:01.155 17:38:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:01.155 17:38:19 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.HxuWACeFNr 00:30:01.155 17:38:19 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HxuWACeFNr 00:30:01.155 17:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HxuWACeFNr 00:30:01.413 17:38:19 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.HxuWACeFNr 00:30:01.413 17:38:20 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:01.413 17:38:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:01.413 17:38:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:01.413 17:38:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:01.413 17:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:01.413 17:38:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:01.672 17:38:20 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:01.672 17:38:20 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:01.672 17:38:20 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:01.672 17:38:20 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 
--psk key0 00:30:01.672 17:38:20 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:01.672 17:38:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:01.672 17:38:20 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:01.672 17:38:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:01.672 17:38:20 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:01.672 17:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:01.672 [2024-07-12 17:38:20.348421] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.HxuWACeFNr': No such file or directory 00:30:01.672 [2024-07-12 17:38:20.348439] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:01.672 [2024-07-12 17:38:20.348459] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:01.672 [2024-07-12 17:38:20.348465] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:01.672 [2024-07-12 17:38:20.348471] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:01.672 request: 00:30:01.672 { 00:30:01.672 "name": "nvme0", 00:30:01.672 "trtype": "tcp", 00:30:01.672 "traddr": "127.0.0.1", 00:30:01.672 "adrfam": "ipv4", 00:30:01.672 "trsvcid": "4420", 00:30:01.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:01.672 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:01.672 "prchk_reftag": false, 00:30:01.672 "prchk_guard": 
false, 00:30:01.672 "hdgst": false, 00:30:01.672 "ddgst": false, 00:30:01.672 "psk": "key0", 00:30:01.672 "method": "bdev_nvme_attach_controller", 00:30:01.672 "req_id": 1 00:30:01.672 } 00:30:01.672 Got JSON-RPC error response 00:30:01.672 response: 00:30:01.672 { 00:30:01.672 "code": -19, 00:30:01.672 "message": "No such device" 00:30:01.672 } 00:30:01.672 17:38:20 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:01.672 17:38:20 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:01.672 17:38:20 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:01.672 17:38:20 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:01.672 17:38:20 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:01.672 17:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:01.930 17:38:20 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:01.930 17:38:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:01.930 17:38:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:01.930 17:38:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:01.930 17:38:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:01.930 17:38:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:01.930 17:38:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IhVVczjbgN 00:30:01.930 17:38:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:01.930 17:38:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:01.930 17:38:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:01.930 17:38:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 
00:30:01.930 17:38:20 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:01.930 17:38:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:01.930 17:38:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:01.930 17:38:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IhVVczjbgN 00:30:01.930 17:38:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IhVVczjbgN 00:30:01.930 17:38:20 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.IhVVczjbgN 00:30:01.930 17:38:20 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IhVVczjbgN 00:30:01.930 17:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IhVVczjbgN 00:30:02.188 17:38:20 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:02.188 17:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:02.447 nvme0n1 00:30:02.447 17:38:21 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:02.447 17:38:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:02.447 17:38:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:02.447 17:38:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:02.447 17:38:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:02.447 17:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:02.447 17:38:21 keyring_file -- 
keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:02.447 17:38:21 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:02.447 17:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:02.707 17:38:21 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:02.707 17:38:21 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:02.707 17:38:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:02.707 17:38:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:02.707 17:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:02.965 17:38:21 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:02.965 17:38:21 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:02.965 17:38:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:02.965 17:38:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:02.965 17:38:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:02.965 17:38:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:02.965 17:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:02.965 17:38:21 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:02.965 17:38:21 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:02.965 17:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:03.224 17:38:21 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:03.224 17:38:21 
keyring_file -- keyring/file.sh@104 -- # jq length 00:30:03.224 17:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:03.483 17:38:22 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:03.483 17:38:22 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IhVVczjbgN 00:30:03.483 17:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IhVVczjbgN 00:30:03.483 17:38:22 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.77icrxVwPo 00:30:03.483 17:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.77icrxVwPo 00:30:03.740 17:38:22 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:03.740 17:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:04.029 nvme0n1 00:30:04.029 17:38:22 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:04.029 17:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:04.292 17:38:22 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:04.292 "subsystems": [ 00:30:04.292 { 00:30:04.292 "subsystem": "keyring", 00:30:04.292 "config": [ 00:30:04.292 { 00:30:04.292 "method": "keyring_file_add_key", 00:30:04.292 "params": { 00:30:04.292 "name": 
"key0", 00:30:04.292 "path": "/tmp/tmp.IhVVczjbgN" 00:30:04.292 } 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "method": "keyring_file_add_key", 00:30:04.292 "params": { 00:30:04.292 "name": "key1", 00:30:04.292 "path": "/tmp/tmp.77icrxVwPo" 00:30:04.292 } 00:30:04.292 } 00:30:04.292 ] 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "subsystem": "iobuf", 00:30:04.292 "config": [ 00:30:04.292 { 00:30:04.292 "method": "iobuf_set_options", 00:30:04.292 "params": { 00:30:04.292 "small_pool_count": 8192, 00:30:04.292 "large_pool_count": 1024, 00:30:04.292 "small_bufsize": 8192, 00:30:04.292 "large_bufsize": 135168 00:30:04.292 } 00:30:04.292 } 00:30:04.292 ] 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "subsystem": "sock", 00:30:04.292 "config": [ 00:30:04.292 { 00:30:04.292 "method": "sock_set_default_impl", 00:30:04.292 "params": { 00:30:04.292 "impl_name": "posix" 00:30:04.292 } 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "method": "sock_impl_set_options", 00:30:04.292 "params": { 00:30:04.292 "impl_name": "ssl", 00:30:04.292 "recv_buf_size": 4096, 00:30:04.292 "send_buf_size": 4096, 00:30:04.292 "enable_recv_pipe": true, 00:30:04.292 "enable_quickack": false, 00:30:04.292 "enable_placement_id": 0, 00:30:04.292 "enable_zerocopy_send_server": true, 00:30:04.292 "enable_zerocopy_send_client": false, 00:30:04.292 "zerocopy_threshold": 0, 00:30:04.292 "tls_version": 0, 00:30:04.292 "enable_ktls": false 00:30:04.292 } 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "method": "sock_impl_set_options", 00:30:04.292 "params": { 00:30:04.292 "impl_name": "posix", 00:30:04.292 "recv_buf_size": 2097152, 00:30:04.292 "send_buf_size": 2097152, 00:30:04.292 "enable_recv_pipe": true, 00:30:04.292 "enable_quickack": false, 00:30:04.292 "enable_placement_id": 0, 00:30:04.292 "enable_zerocopy_send_server": true, 00:30:04.292 "enable_zerocopy_send_client": false, 00:30:04.292 "zerocopy_threshold": 0, 00:30:04.292 "tls_version": 0, 00:30:04.292 "enable_ktls": false 00:30:04.292 } 00:30:04.292 } 
00:30:04.292 ] 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "subsystem": "vmd", 00:30:04.292 "config": [] 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "subsystem": "accel", 00:30:04.292 "config": [ 00:30:04.292 { 00:30:04.292 "method": "accel_set_options", 00:30:04.292 "params": { 00:30:04.292 "small_cache_size": 128, 00:30:04.292 "large_cache_size": 16, 00:30:04.292 "task_count": 2048, 00:30:04.292 "sequence_count": 2048, 00:30:04.292 "buf_count": 2048 00:30:04.292 } 00:30:04.292 } 00:30:04.292 ] 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "subsystem": "bdev", 00:30:04.292 "config": [ 00:30:04.292 { 00:30:04.292 "method": "bdev_set_options", 00:30:04.292 "params": { 00:30:04.292 "bdev_io_pool_size": 65535, 00:30:04.292 "bdev_io_cache_size": 256, 00:30:04.292 "bdev_auto_examine": true, 00:30:04.292 "iobuf_small_cache_size": 128, 00:30:04.292 "iobuf_large_cache_size": 16 00:30:04.292 } 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "method": "bdev_raid_set_options", 00:30:04.292 "params": { 00:30:04.292 "process_window_size_kb": 1024 00:30:04.292 } 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "method": "bdev_iscsi_set_options", 00:30:04.292 "params": { 00:30:04.292 "timeout_sec": 30 00:30:04.292 } 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "method": "bdev_nvme_set_options", 00:30:04.292 "params": { 00:30:04.292 "action_on_timeout": "none", 00:30:04.292 "timeout_us": 0, 00:30:04.292 "timeout_admin_us": 0, 00:30:04.292 "keep_alive_timeout_ms": 10000, 00:30:04.292 "arbitration_burst": 0, 00:30:04.292 "low_priority_weight": 0, 00:30:04.292 "medium_priority_weight": 0, 00:30:04.292 "high_priority_weight": 0, 00:30:04.292 "nvme_adminq_poll_period_us": 10000, 00:30:04.292 "nvme_ioq_poll_period_us": 0, 00:30:04.292 "io_queue_requests": 512, 00:30:04.292 "delay_cmd_submit": true, 00:30:04.292 "transport_retry_count": 4, 00:30:04.292 "bdev_retry_count": 3, 00:30:04.292 "transport_ack_timeout": 0, 00:30:04.292 "ctrlr_loss_timeout_sec": 0, 00:30:04.292 "reconnect_delay_sec": 0, 
00:30:04.292 "fast_io_fail_timeout_sec": 0, 00:30:04.292 "disable_auto_failback": false, 00:30:04.292 "generate_uuids": false, 00:30:04.292 "transport_tos": 0, 00:30:04.292 "nvme_error_stat": false, 00:30:04.292 "rdma_srq_size": 0, 00:30:04.292 "io_path_stat": false, 00:30:04.292 "allow_accel_sequence": false, 00:30:04.292 "rdma_max_cq_size": 0, 00:30:04.292 "rdma_cm_event_timeout_ms": 0, 00:30:04.292 "dhchap_digests": [ 00:30:04.292 "sha256", 00:30:04.292 "sha384", 00:30:04.292 "sha512" 00:30:04.292 ], 00:30:04.292 "dhchap_dhgroups": [ 00:30:04.292 "null", 00:30:04.292 "ffdhe2048", 00:30:04.292 "ffdhe3072", 00:30:04.292 "ffdhe4096", 00:30:04.292 "ffdhe6144", 00:30:04.292 "ffdhe8192" 00:30:04.292 ] 00:30:04.292 } 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "method": "bdev_nvme_attach_controller", 00:30:04.292 "params": { 00:30:04.292 "name": "nvme0", 00:30:04.292 "trtype": "TCP", 00:30:04.292 "adrfam": "IPv4", 00:30:04.292 "traddr": "127.0.0.1", 00:30:04.292 "trsvcid": "4420", 00:30:04.292 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.292 "prchk_reftag": false, 00:30:04.292 "prchk_guard": false, 00:30:04.292 "ctrlr_loss_timeout_sec": 0, 00:30:04.292 "reconnect_delay_sec": 0, 00:30:04.292 "fast_io_fail_timeout_sec": 0, 00:30:04.292 "psk": "key0", 00:30:04.292 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:04.292 "hdgst": false, 00:30:04.292 "ddgst": false 00:30:04.292 } 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "method": "bdev_nvme_set_hotplug", 00:30:04.292 "params": { 00:30:04.292 "period_us": 100000, 00:30:04.292 "enable": false 00:30:04.292 } 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "method": "bdev_wait_for_examine" 00:30:04.292 } 00:30:04.292 ] 00:30:04.292 }, 00:30:04.292 { 00:30:04.292 "subsystem": "nbd", 00:30:04.292 "config": [] 00:30:04.292 } 00:30:04.292 ] 00:30:04.292 }' 00:30:04.292 17:38:22 keyring_file -- keyring/file.sh@114 -- # killprocess 71434 00:30:04.293 17:38:22 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 71434 ']' 
00:30:04.293 17:38:22 keyring_file -- common/autotest_common.sh@952 -- # kill -0 71434 00:30:04.293 17:38:22 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:04.293 17:38:22 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:04.293 17:38:22 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71434 00:30:04.293 17:38:22 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:04.293 17:38:22 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:04.293 17:38:22 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71434' 00:30:04.293 killing process with pid 71434 00:30:04.293 17:38:22 keyring_file -- common/autotest_common.sh@967 -- # kill 71434 00:30:04.293 Received shutdown signal, test time was about 1.000000 seconds 00:30:04.293 00:30:04.293 Latency(us) 00:30:04.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.293 =================================================================================================================== 00:30:04.293 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:04.293 17:38:22 keyring_file -- common/autotest_common.sh@972 -- # wait 71434 00:30:04.552 17:38:23 keyring_file -- keyring/file.sh@117 -- # bperfpid=72947 00:30:04.552 17:38:23 keyring_file -- keyring/file.sh@119 -- # waitforlisten 72947 /var/tmp/bperf.sock 00:30:04.552 17:38:23 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 72947 ']' 00:30:04.552 17:38:23 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:04.552 17:38:23 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:04.552 17:38:23 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:04.552 17:38:23 keyring_file -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:04.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:04.552 17:38:23 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:04.552 "subsystems": [ 00:30:04.552 { 00:30:04.552 "subsystem": "keyring", 00:30:04.552 "config": [ 00:30:04.552 { 00:30:04.552 "method": "keyring_file_add_key", 00:30:04.552 "params": { 00:30:04.552 "name": "key0", 00:30:04.552 "path": "/tmp/tmp.IhVVczjbgN" 00:30:04.552 } 00:30:04.552 }, 00:30:04.552 { 00:30:04.552 "method": "keyring_file_add_key", 00:30:04.552 "params": { 00:30:04.552 "name": "key1", 00:30:04.552 "path": "/tmp/tmp.77icrxVwPo" 00:30:04.552 } 00:30:04.552 } 00:30:04.552 ] 00:30:04.552 }, 00:30:04.552 { 00:30:04.552 "subsystem": "iobuf", 00:30:04.552 "config": [ 00:30:04.552 { 00:30:04.552 "method": "iobuf_set_options", 00:30:04.552 "params": { 00:30:04.552 "small_pool_count": 8192, 00:30:04.552 "large_pool_count": 1024, 00:30:04.552 "small_bufsize": 8192, 00:30:04.552 "large_bufsize": 135168 00:30:04.552 } 00:30:04.552 } 00:30:04.552 ] 00:30:04.552 }, 00:30:04.552 { 00:30:04.552 "subsystem": "sock", 00:30:04.552 "config": [ 00:30:04.552 { 00:30:04.552 "method": "sock_set_default_impl", 00:30:04.552 "params": { 00:30:04.552 "impl_name": "posix" 00:30:04.552 } 00:30:04.552 }, 00:30:04.552 { 00:30:04.552 "method": "sock_impl_set_options", 00:30:04.552 "params": { 00:30:04.552 "impl_name": "ssl", 00:30:04.552 "recv_buf_size": 4096, 00:30:04.552 "send_buf_size": 4096, 00:30:04.552 "enable_recv_pipe": true, 00:30:04.552 "enable_quickack": false, 00:30:04.552 "enable_placement_id": 0, 00:30:04.552 "enable_zerocopy_send_server": true, 00:30:04.552 "enable_zerocopy_send_client": false, 00:30:04.552 "zerocopy_threshold": 0, 00:30:04.552 "tls_version": 0, 00:30:04.552 "enable_ktls": false 00:30:04.552 } 00:30:04.552 }, 00:30:04.552 { 00:30:04.552 "method": 
"sock_impl_set_options", 00:30:04.552 "params": { 00:30:04.552 "impl_name": "posix", 00:30:04.552 "recv_buf_size": 2097152, 00:30:04.552 "send_buf_size": 2097152, 00:30:04.552 "enable_recv_pipe": true, 00:30:04.552 "enable_quickack": false, 00:30:04.552 "enable_placement_id": 0, 00:30:04.552 "enable_zerocopy_send_server": true, 00:30:04.552 "enable_zerocopy_send_client": false, 00:30:04.552 "zerocopy_threshold": 0, 00:30:04.552 "tls_version": 0, 00:30:04.552 "enable_ktls": false 00:30:04.552 } 00:30:04.552 } 00:30:04.552 ] 00:30:04.552 }, 00:30:04.552 { 00:30:04.552 "subsystem": "vmd", 00:30:04.552 "config": [] 00:30:04.552 }, 00:30:04.552 { 00:30:04.552 "subsystem": "accel", 00:30:04.552 "config": [ 00:30:04.552 { 00:30:04.552 "method": "accel_set_options", 00:30:04.552 "params": { 00:30:04.552 "small_cache_size": 128, 00:30:04.552 "large_cache_size": 16, 00:30:04.552 "task_count": 2048, 00:30:04.552 "sequence_count": 2048, 00:30:04.552 "buf_count": 2048 00:30:04.552 } 00:30:04.552 } 00:30:04.552 ] 00:30:04.552 }, 00:30:04.552 { 00:30:04.552 "subsystem": "bdev", 00:30:04.552 "config": [ 00:30:04.552 { 00:30:04.552 "method": "bdev_set_options", 00:30:04.552 "params": { 00:30:04.552 "bdev_io_pool_size": 65535, 00:30:04.552 "bdev_io_cache_size": 256, 00:30:04.552 "bdev_auto_examine": true, 00:30:04.552 "iobuf_small_cache_size": 128, 00:30:04.552 "iobuf_large_cache_size": 16 00:30:04.552 } 00:30:04.552 }, 00:30:04.552 { 00:30:04.552 "method": "bdev_raid_set_options", 00:30:04.552 "params": { 00:30:04.552 "process_window_size_kb": 1024 00:30:04.552 } 00:30:04.552 }, 00:30:04.552 { 00:30:04.552 "method": "bdev_iscsi_set_options", 00:30:04.552 "params": { 00:30:04.552 "timeout_sec": 30 00:30:04.552 } 00:30:04.552 }, 00:30:04.552 { 00:30:04.552 "method": "bdev_nvme_set_options", 00:30:04.552 "params": { 00:30:04.552 "action_on_timeout": "none", 00:30:04.552 "timeout_us": 0, 00:30:04.552 "timeout_admin_us": 0, 00:30:04.552 "keep_alive_timeout_ms": 10000, 00:30:04.552 
"arbitration_burst": 0, 00:30:04.552 "low_priority_weight": 0, 00:30:04.552 "medium_priority_weight": 0, 00:30:04.552 "high_priority_weight": 0, 00:30:04.552 "nvme_adminq_poll_period_us": 10000, 00:30:04.552 "nvme_ioq_poll_period_us": 0, 00:30:04.552 "io_queue_requests": 512, 00:30:04.552 "delay_cmd_submit": true, 00:30:04.553 "transport_retry_count": 4, 00:30:04.553 "bdev_retry_count": 3, 00:30:04.553 "transport_ack_timeout": 0, 00:30:04.553 "ctrlr_loss_timeout_sec": 0, 00:30:04.553 "reconnect_delay_sec": 0, 00:30:04.553 "fast_io_fail_timeout_sec": 0, 00:30:04.553 "disable_auto_failback": false, 00:30:04.553 "generate_uuids": false, 00:30:04.553 "transport_tos": 0, 00:30:04.553 "nvme_error_stat": false, 00:30:04.553 "rdma_srq_size": 0, 00:30:04.553 "io_path_stat": false, 00:30:04.553 "allow_accel_sequence": false, 00:30:04.553 "rdma_max_cq_size": 0, 00:30:04.553 "rdma_cm_event_timeout_ms": 0, 00:30:04.553 "dhchap_digests": [ 00:30:04.553 "sha256", 00:30:04.553 "sha384", 00:30:04.553 "sha512" 00:30:04.553 ], 00:30:04.553 "dhchap_dhgroups": [ 00:30:04.553 "null", 00:30:04.553 "ffdhe2048", 00:30:04.553 "ffdhe3072", 00:30:04.553 "ffdhe4096", 00:30:04.553 "ffdhe6144", 00:30:04.553 "ffdhe8192" 00:30:04.553 ] 00:30:04.553 } 00:30:04.553 }, 00:30:04.553 { 00:30:04.553 "method": "bdev_nvme_attach_controller", 00:30:04.553 "params": { 00:30:04.553 "name": "nvme0", 00:30:04.553 "trtype": "TCP", 00:30:04.553 "adrfam": "IPv4", 00:30:04.553 "traddr": "127.0.0.1", 00:30:04.553 "trsvcid": "4420", 00:30:04.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.553 "prchk_reftag": false, 00:30:04.553 "prchk_guard": false, 00:30:04.553 "ctrlr_loss_timeout_sec": 0, 00:30:04.553 "reconnect_delay_sec": 0, 00:30:04.553 "fast_io_fail_timeout_sec": 0, 00:30:04.553 "psk": "key0", 00:30:04.553 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:04.553 "hdgst": false, 00:30:04.553 "ddgst": false 00:30:04.553 } 00:30:04.553 }, 00:30:04.553 { 00:30:04.553 "method": "bdev_nvme_set_hotplug", 
00:30:04.553 "params": { 00:30:04.553 "period_us": 100000, 00:30:04.553 "enable": false 00:30:04.553 } 00:30:04.553 }, 00:30:04.553 { 00:30:04.553 "method": "bdev_wait_for_examine" 00:30:04.553 } 00:30:04.553 ] 00:30:04.553 }, 00:30:04.553 { 00:30:04.553 "subsystem": "nbd", 00:30:04.553 "config": [] 00:30:04.553 } 00:30:04.553 ] 00:30:04.553 }' 00:30:04.553 17:38:23 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:04.553 17:38:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:04.553 [2024-07-12 17:38:23.160640] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 00:30:04.553 [2024-07-12 17:38:23.160691] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72947 ] 00:30:04.553 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.553 [2024-07-12 17:38:23.214786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.553 [2024-07-12 17:38:23.282666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.812 [2024-07-12 17:38:23.442069] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:05.381 17:38:23 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:05.381 17:38:23 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:05.381 17:38:23 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:05.381 17:38:23 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:05.381 17:38:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:05.381 17:38:24 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:05.381 17:38:24 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 
00:30:05.381 17:38:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:05.381 17:38:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:05.381 17:38:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:05.381 17:38:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:05.381 17:38:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:05.640 17:38:24 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:05.640 17:38:24 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:05.640 17:38:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:05.640 17:38:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:05.640 17:38:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:05.640 17:38:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:05.640 17:38:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:05.899 17:38:24 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:05.899 17:38:24 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:05.899 17:38:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:05.899 17:38:24 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:05.899 17:38:24 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:05.899 17:38:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:05.899 17:38:24 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.IhVVczjbgN /tmp/tmp.77icrxVwPo 00:30:06.158 17:38:24 keyring_file -- keyring/file.sh@20 -- # killprocess 72947 00:30:06.158 17:38:24 keyring_file -- 
common/autotest_common.sh@948 -- # '[' -z 72947 ']' 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@952 -- # kill -0 72947 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72947 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72947' 00:30:06.158 killing process with pid 72947 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@967 -- # kill 72947 00:30:06.158 Received shutdown signal, test time was about 1.000000 seconds 00:30:06.158 00:30:06.158 Latency(us) 00:30:06.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.158 =================================================================================================================== 00:30:06.158 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@972 -- # wait 72947 00:30:06.158 17:38:24 keyring_file -- keyring/file.sh@21 -- # killprocess 71392 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 71392 ']' 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@952 -- # kill -0 71392 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:06.158 17:38:24 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71392 00:30:06.418 17:38:24 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:06.418 17:38:24 
keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:06.418 17:38:24 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71392' 00:30:06.418 killing process with pid 71392 00:30:06.418 17:38:24 keyring_file -- common/autotest_common.sh@967 -- # kill 71392 00:30:06.418 [2024-07-12 17:38:24.944685] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:06.418 17:38:24 keyring_file -- common/autotest_common.sh@972 -- # wait 71392 00:30:06.677 00:30:06.677 real 0m11.920s 00:30:06.677 user 0m28.499s 00:30:06.677 sys 0m2.664s 00:30:06.677 17:38:25 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:06.677 17:38:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:06.677 ************************************ 00:30:06.677 END TEST keyring_file 00:30:06.677 ************************************ 00:30:06.677 17:38:25 -- common/autotest_common.sh@1142 -- # return 0 00:30:06.677 17:38:25 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:30:06.677 17:38:25 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:06.677 17:38:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:06.677 17:38:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:06.677 17:38:25 -- common/autotest_common.sh@10 -- # set +x 00:30:06.677 ************************************ 00:30:06.677 START TEST keyring_linux 00:30:06.677 ************************************ 00:30:06.677 17:38:25 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:06.677 * Looking for test storage... 
00:30:06.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:06.677 17:38:25 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:06.677 17:38:25 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.677 17:38:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.677 17:38:25 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.677 17:38:25 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.677 17:38:25 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.677 17:38:25 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.678 17:38:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.678 17:38:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.678 17:38:25 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.678 17:38:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:06.678 17:38:25 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:06.678 17:38:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:06.678 17:38:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:06.678 17:38:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:06.678 17:38:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:06.678 17:38:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:06.678 17:38:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:06.678 17:38:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:06.678 17:38:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:06.678 17:38:25 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:30:06.678 17:38:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:06.678 17:38:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:06.678 17:38:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:06.678 17:38:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:06.678 17:38:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:06.938 17:38:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:06.938 17:38:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:06.938 /tmp/:spdk-test:key0 00:30:06.938 17:38:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:06.938 17:38:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:06.938 17:38:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:06.938 17:38:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:06.938 17:38:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:06.938 17:38:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:06.938 17:38:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:06.938 17:38:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:30:06.938 17:38:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:06.938 17:38:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:06.938 17:38:25 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:06.938 17:38:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:06.938 17:38:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:06.938 17:38:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:06.938 17:38:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:06.938 /tmp/:spdk-test:key1 00:30:06.938 17:38:25 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:06.938 17:38:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=73497 00:30:06.938 17:38:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 73497 00:30:06.938 17:38:25 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 73497 ']' 00:30:06.938 17:38:25 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.938 17:38:25 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:06.938 17:38:25 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.938 17:38:25 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:06.938 17:38:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:06.938 [2024-07-12 17:38:25.556766] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:30:06.938 [2024-07-12 17:38:25.556809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73497 ] 00:30:06.938 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.938 [2024-07-12 17:38:25.609732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.938 [2024-07-12 17:38:25.687842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.875 17:38:26 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:07.875 17:38:26 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:07.875 17:38:26 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:07.875 17:38:26 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.875 17:38:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:07.875 [2024-07-12 17:38:26.378685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.875 null0 00:30:07.875 [2024-07-12 17:38:26.410742] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:07.875 [2024-07-12 17:38:26.411081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:07.875 17:38:26 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.875 17:38:26 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:07.875 339719566 00:30:07.875 17:38:26 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:07.875 944268599 00:30:07.875 17:38:26 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=73650 00:30:07.875 17:38:26 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 73650 
/var/tmp/bperf.sock 00:30:07.875 17:38:26 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:07.875 17:38:26 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 73650 ']' 00:30:07.875 17:38:26 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:07.875 17:38:26 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:07.875 17:38:26 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:07.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:07.875 17:38:26 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:07.875 17:38:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:07.875 [2024-07-12 17:38:26.482749] Starting SPDK v24.09-pre git sha1 a0b7842f9 / DPDK 24.03.0 initialization... 
00:30:07.875 [2024-07-12 17:38:26.482790] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73650 ] 00:30:07.875 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.875 [2024-07-12 17:38:26.536862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.875 [2024-07-12 17:38:26.616399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.813 17:38:27 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:08.813 17:38:27 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:08.813 17:38:27 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:08.813 17:38:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:08.813 17:38:27 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:08.813 17:38:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:09.073 17:38:27 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:09.073 17:38:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:09.333 [2024-07-12 17:38:27.853444] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:09.333 
nvme0n1 00:30:09.333 17:38:27 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:09.333 17:38:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:09.333 17:38:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:09.333 17:38:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:09.333 17:38:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:09.333 17:38:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:09.592 17:38:28 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:09.592 17:38:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:09.592 17:38:28 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:09.592 17:38:28 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:09.592 17:38:28 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:09.592 17:38:28 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:09.592 17:38:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:09.592 17:38:28 keyring_linux -- keyring/linux.sh@25 -- # sn=339719566 00:30:09.592 17:38:28 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:09.592 17:38:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:09.592 17:38:28 keyring_linux -- keyring/linux.sh@26 -- # [[ 339719566 == \3\3\9\7\1\9\5\6\6 ]] 00:30:09.592 17:38:28 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 339719566 00:30:09.592 17:38:28 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:09.592 17:38:28 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:09.851 Running I/O for 1 seconds... 00:30:10.788 00:30:10.788 Latency(us) 00:30:10.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.788 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:10.789 nvme0n1 : 1.01 17703.87 69.16 0.00 0.00 7201.57 2308.01 9289.02 00:30:10.789 =================================================================================================================== 00:30:10.789 Total : 17703.87 69.16 0.00 0.00 7201.57 2308.01 9289.02 00:30:10.789 0 00:30:10.789 17:38:29 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:10.789 17:38:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:11.048 17:38:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:11.048 17:38:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:11.048 17:38:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:11.048 17:38:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:11.048 17:38:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:11.048 17:38:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:11.048 17:38:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:11.048 17:38:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:11.048 17:38:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:11.048 17:38:29 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:11.048 17:38:29 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:30:11.048 17:38:29 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:11.048 17:38:29 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:11.048 17:38:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:11.048 17:38:29 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:11.048 17:38:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:11.048 17:38:29 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:11.048 17:38:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:11.307 [2024-07-12 17:38:29.921664] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:11.307 [2024-07-12 17:38:29.922423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2511fd0 (107): Transport endpoint is not connected 00:30:11.307 [2024-07-12 17:38:29.923418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x2511fd0 (9): Bad file descriptor 00:30:11.307 [2024-07-12 17:38:29.924419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:11.307 [2024-07-12 17:38:29.924428] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:11.307 [2024-07-12 17:38:29.924438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:11.307 request: 00:30:11.307 { 00:30:11.307 "name": "nvme0", 00:30:11.307 "trtype": "tcp", 00:30:11.307 "traddr": "127.0.0.1", 00:30:11.307 "adrfam": "ipv4", 00:30:11.307 "trsvcid": "4420", 00:30:11.307 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:11.307 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:11.307 "prchk_reftag": false, 00:30:11.307 "prchk_guard": false, 00:30:11.307 "hdgst": false, 00:30:11.307 "ddgst": false, 00:30:11.307 "psk": ":spdk-test:key1", 00:30:11.307 "method": "bdev_nvme_attach_controller", 00:30:11.307 "req_id": 1 00:30:11.307 } 00:30:11.307 Got JSON-RPC error response 00:30:11.307 response: 00:30:11.307 { 00:30:11.307 "code": -5, 00:30:11.307 "message": "Input/output error" 00:30:11.307 } 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@33 -- # sn=339719566 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 339719566 00:30:11.307 1 links removed 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@33 -- # sn=944268599 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 944268599 00:30:11.307 1 links removed 00:30:11.307 17:38:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 73650 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 73650 ']' 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 73650 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73650 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73650' 00:30:11.307 killing process with pid 73650 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@967 -- # kill 73650 00:30:11.307 Received shutdown signal, test time was about 1.000000 seconds 00:30:11.307 00:30:11.307 Latency(us) 00:30:11.307 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:30:11.307 =================================================================================================================== 00:30:11.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:11.307 17:38:29 keyring_linux -- common/autotest_common.sh@972 -- # wait 73650 00:30:11.566 17:38:30 keyring_linux -- keyring/linux.sh@42 -- # killprocess 73497 00:30:11.566 17:38:30 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 73497 ']' 00:30:11.566 17:38:30 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 73497 00:30:11.566 17:38:30 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:11.566 17:38:30 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:11.566 17:38:30 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73497 00:30:11.566 17:38:30 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:11.566 17:38:30 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:11.566 17:38:30 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73497' 00:30:11.566 killing process with pid 73497 00:30:11.566 17:38:30 keyring_linux -- common/autotest_common.sh@967 -- # kill 73497 00:30:11.566 17:38:30 keyring_linux -- common/autotest_common.sh@972 -- # wait 73497 00:30:11.825 00:30:11.825 real 0m5.207s 00:30:11.825 user 0m9.263s 00:30:11.825 sys 0m1.524s 00:30:11.825 17:38:30 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:11.825 17:38:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:11.825 ************************************ 00:30:11.825 END TEST keyring_linux 00:30:11.825 ************************************ 00:30:11.825 17:38:30 -- common/autotest_common.sh@1142 -- # return 0 00:30:11.825 17:38:30 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:11.825 17:38:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:11.825 17:38:30 
-- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:11.825 17:38:30 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:30:11.825 17:38:30 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:30:11.825 17:38:30 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:11.825 17:38:30 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:11.825 17:38:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:11.825 17:38:30 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:11.825 17:38:30 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:11.825 17:38:30 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:11.825 17:38:30 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:11.825 17:38:30 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:11.825 17:38:30 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:11.825 17:38:30 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:11.825 17:38:30 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:11.825 17:38:30 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:11.825 17:38:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:11.825 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:30:11.825 17:38:30 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:11.825 17:38:30 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:11.825 17:38:30 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:11.825 17:38:30 -- common/autotest_common.sh@10 -- # set +x 00:30:17.097 INFO: APP EXITING 00:30:17.097 INFO: killing all VMs 00:30:17.097 INFO: killing vhost app 00:30:17.097 INFO: EXIT DONE 00:30:19.635 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:30:19.635 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:00:04.3 (8086 2021): Already using the ioatdma driver 
00:30:19.635 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:30:19.635 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:30:22.172 Cleaning 00:30:22.172 Removing: /var/run/dpdk/spdk0/config 00:30:22.172 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:22.172 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:22.172 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:22.172 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:22.172 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:22.172 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:22.172 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:22.172 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:22.172 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:22.172 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:22.172 Removing: /var/run/dpdk/spdk1/config 00:30:22.172 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:22.172 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:22.172 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:22.172 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:22.172 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:22.172 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:22.172 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:22.172 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:22.172 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:22.172 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:22.172 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:22.172 Removing: /var/run/dpdk/spdk2/config 00:30:22.172 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:22.172 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:22.172 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:22.172 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:22.172 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:22.172 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:22.172 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:22.172 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:22.172 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:22.172 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:22.172 Removing: /var/run/dpdk/spdk3/config 00:30:22.172 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:22.172 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:22.172 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:22.172 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:22.172 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:22.172 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:22.172 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:22.172 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:22.172 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:22.172 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:22.172 Removing: /var/run/dpdk/spdk4/config 00:30:22.172 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:22.172 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:22.172 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:22.172 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:22.172 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:22.172 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:22.172 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:22.172 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:22.172 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:22.172 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:22.172 Removing: /dev/shm/bdev_svc_trace.1 00:30:22.172 Removing: /dev/shm/nvmf_trace.0 00:30:22.172 Removing: /dev/shm/spdk_tgt_trace.pid3883599 00:30:22.172 Removing: /var/run/dpdk/spdk0 00:30:22.172 Removing: /var/run/dpdk/spdk1 00:30:22.172 Removing: /var/run/dpdk/spdk2 00:30:22.172 Removing: /var/run/dpdk/spdk3 00:30:22.172 Removing: /var/run/dpdk/spdk4 00:30:22.172 Removing: /var/run/dpdk/spdk_pid15833 00:30:22.172 Removing: /var/run/dpdk/spdk_pid16423 00:30:22.172 Removing: /var/run/dpdk/spdk_pid17122 00:30:22.172 Removing: /var/run/dpdk/spdk_pid17806 00:30:22.172 Removing: /var/run/dpdk/spdk_pid18574 00:30:22.172 Removing: /var/run/dpdk/spdk_pid19274 00:30:22.172 Removing: /var/run/dpdk/spdk_pid19978 00:30:22.172 Removing: /var/run/dpdk/spdk_pid20491 00:30:22.172 Removing: /var/run/dpdk/spdk_pid24711 00:30:22.172 Removing: /var/run/dpdk/spdk_pid24940 00:30:22.172 Removing: /var/run/dpdk/spdk_pid30993 00:30:22.172 Removing: /var/run/dpdk/spdk_pid31226 00:30:22.172 Removing: /var/run/dpdk/spdk_pid33447 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3881418 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3882530 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3883599 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3884234 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3885180 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3885425 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3886396 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3886628 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3886755 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3888315 
00:30:22.172 Removing: /var/run/dpdk/spdk_pid3889515 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3889799 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3890083 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3890387 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3890692 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3890931 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3891183 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3891455 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3892436 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3895419 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3895685 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3895952 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3896120 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3896462 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3896684 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3897177 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3897267 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3897542 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3897685 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3897946 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3898061 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3898513 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3898766 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3899051 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3899326 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3899348 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3899622 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3899876 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3900124 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3900375 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3900624 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3900870 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3901123 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3901371 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3901622 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3901877 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3902123 00:30:22.172 Removing: 
/var/run/dpdk/spdk_pid3902378 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3902624 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3902871 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3903122 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3903372 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3903617 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3903877 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3904128 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3904376 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3904628 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3904756 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3905216 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3908862 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3952637 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3956816 00:30:22.172 Removing: /var/run/dpdk/spdk_pid3967280 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3972619 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3976458 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3977138 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3983130 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3989149 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3989151 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3990066 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3990876 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3991685 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3992375 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3992441 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3992725 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3992837 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3992843 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3993751 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3994671 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3995458 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3996054 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3996059 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3996295 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3997536 00:30:22.173 Removing: /var/run/dpdk/spdk_pid3998637 
00:30:22.173 Removing: /var/run/dpdk/spdk_pid4007149 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4007819 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4011944 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4017714 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4020311 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4030549 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4039368 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4041187 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4042060 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4059139 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4062899 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4087639 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4092136 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4093793 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4096094 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4096332 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4096570 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4096805 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4097318 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4099161 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4100148 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4100655 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4102964 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4103506 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4104198 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4108262 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4118190 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4122229 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4128218 00:30:22.173 Removing: /var/run/dpdk/spdk_pid4129520 00:30:22.432 Removing: /var/run/dpdk/spdk_pid4131067 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4135350 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4139954 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4147252 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4147350 00:30:22.433 Removing: /var/run/dpdk/spdk_pid41491 00:30:22.433 Removing: /var/run/dpdk/spdk_pid41516 00:30:22.433 Removing: 
/var/run/dpdk/spdk_pid4151942 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4152123 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4152249 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4152642 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4152666 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4157118 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4157686 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4162019 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4164778 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4170166 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4175496 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4183920 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4191379 00:30:22.433 Removing: /var/run/dpdk/spdk_pid4191430 00:30:22.433 Removing: /var/run/dpdk/spdk_pid46555 00:30:22.433 Removing: /var/run/dpdk/spdk_pid48508 00:30:22.433 Removing: /var/run/dpdk/spdk_pid50470 00:30:22.433 Removing: /var/run/dpdk/spdk_pid51524 00:30:22.433 Removing: /var/run/dpdk/spdk_pid53625 00:30:22.433 Removing: /var/run/dpdk/spdk_pid54766 00:30:22.433 Removing: /var/run/dpdk/spdk_pid63288 00:30:22.433 Removing: /var/run/dpdk/spdk_pid63747 00:30:22.433 Removing: /var/run/dpdk/spdk_pid64420 00:30:22.433 Removing: /var/run/dpdk/spdk_pid66679 00:30:22.433 Removing: /var/run/dpdk/spdk_pid67147 00:30:22.433 Removing: /var/run/dpdk/spdk_pid67612 00:30:22.433 Removing: /var/run/dpdk/spdk_pid71392 00:30:22.433 Removing: /var/run/dpdk/spdk_pid71434 00:30:22.433 Removing: /var/run/dpdk/spdk_pid72947 00:30:22.433 Removing: /var/run/dpdk/spdk_pid73497 00:30:22.433 Removing: /var/run/dpdk/spdk_pid73650 00:30:22.433 Clean 00:30:22.433 17:38:41 -- common/autotest_common.sh@1451 -- # return 0 00:30:22.433 17:38:41 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:22.433 17:38:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:22.433 17:38:41 -- common/autotest_common.sh@10 -- # set +x 00:30:22.433 17:38:41 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:22.433 17:38:41 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:30:22.433 17:38:41 -- common/autotest_common.sh@10 -- # set +x 00:30:22.433 17:38:41 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:22.433 17:38:41 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:30:22.433 17:38:41 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:30:22.433 17:38:41 -- spdk/autotest.sh@391 -- # hash lcov 00:30:22.433 17:38:41 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:22.433 17:38:41 -- spdk/autotest.sh@393 -- # hostname 00:30:22.433 17:38:41 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:30:22.692 geninfo: WARNING: invalid characters removed from testname! 
00:30:44.682 17:39:01 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:45.247 17:39:03 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:47.152 17:39:05 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:49.057 17:39:07 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:50.962 17:39:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:52.868 17:39:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:54.785 17:39:13 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:54.785 17:39:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.785 17:39:13 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:54.785 17:39:13 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.785 17:39:13 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.785 17:39:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.785 17:39:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.785 17:39:13 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:54.785 17:39:13 -- paths/export.sh@5 -- $ export PATH
00:30:54.785 17:39:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:54.785 17:39:13 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:30:54.785 17:39:13 -- common/autobuild_common.sh@444 -- $ date +%s
00:30:54.785 17:39:13 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720798753.XXXXXX
00:30:54.785 17:39:13 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720798753.4uqU3L
00:30:54.785 17:39:13 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:30:54.785 17:39:13 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:30:54.785 17:39:13 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:30:54.785 17:39:13 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:54.785 17:39:13 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:54.785 17:39:13 -- common/autobuild_common.sh@460 -- $ get_config_params
00:30:54.785 17:39:13 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:30:54.785 17:39:13 -- common/autotest_common.sh@10 -- $ set +x
00:30:54.785 17:39:13 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:30:54.785 17:39:13 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:30:54.785 17:39:13 -- pm/common@17 -- $ local monitor
00:30:54.785 17:39:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:54.785 17:39:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:54.785 17:39:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:54.785 17:39:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:54.785 17:39:13 -- pm/common@25 -- $ sleep 1
00:30:54.785 17:39:13 -- pm/common@21 -- $ date +%s
00:30:54.785 17:39:13 -- pm/common@21 -- $ date +%s
00:30:54.785 17:39:13 -- pm/common@21 -- $ date +%s
00:30:54.785 17:39:13 -- pm/common@21 -- $ date +%s
00:30:54.785 17:39:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720798753
00:30:54.785 17:39:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720798753
00:30:54.785 17:39:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720798753
00:30:54.785 17:39:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720798753
00:30:54.785 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720798753_collect-vmstat.pm.log
00:30:54.785 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720798753_collect-cpu-load.pm.log
00:30:54.785 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720798753_collect-cpu-temp.pm.log
00:30:54.785 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720798753_collect-bmc-pm.bmc.pm.log
00:30:55.353 17:39:14 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:30:55.353 17:39:14 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:30:55.353 17:39:14 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:55.353 17:39:14 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:55.353 17:39:14 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:55.353 17:39:14 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:55.353 17:39:14 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:55.353 17:39:14 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:55.353 17:39:14 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:55.612 17:39:14 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:55.612 17:39:14 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:55.612 17:39:14 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:55.612 17:39:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:55.612 17:39:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:55.612 17:39:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:30:55.612 17:39:14 -- pm/common@44 -- $ pid=84215
00:30:55.612 17:39:14 -- pm/common@50 -- $ kill -TERM 84215
00:30:55.612 17:39:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:55.612 17:39:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:30:55.612 17:39:14 -- pm/common@44 -- $ pid=84216
00:30:55.612 17:39:14 -- pm/common@50 -- $ kill -TERM 84216
00:30:55.612 17:39:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:55.612 17:39:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:30:55.612 17:39:14 -- pm/common@44 -- $ pid=84218
00:30:55.612 17:39:14 -- pm/common@50 -- $ kill -TERM 84218
00:30:55.612 17:39:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:55.612 17:39:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:30:55.612 17:39:14 -- pm/common@44 -- $ pid=84241
00:30:55.612 17:39:14 -- pm/common@50 -- $ sudo -E kill -TERM 84241
00:30:55.612 + [[ -n 3777724 ]]
00:30:55.612 + sudo kill 3777724
00:30:55.621 [Pipeline] }
00:30:55.639 [Pipeline] // stage
00:30:55.644 [Pipeline] }
00:30:55.660 [Pipeline] // timeout
00:30:55.665 [Pipeline] }
00:30:55.682 [Pipeline] // catchError
00:30:55.687 [Pipeline] }
00:30:55.701 [Pipeline] // wrap
00:30:55.707 [Pipeline] }
00:30:55.717 [Pipeline] // catchError
00:30:55.724 [Pipeline] stage
00:30:55.727 [Pipeline] { (Epilogue)
00:30:55.740 [Pipeline] catchError
00:30:55.741 [Pipeline] {
00:30:55.753 [Pipeline] echo
00:30:55.754 Cleanup processes
00:30:55.760 [Pipeline] sh
00:30:56.043 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:56.043 84343 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:30:56.043 84613 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:56.056 [Pipeline] sh
00:30:56.338 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:56.338 ++ awk '{print $1}'
00:30:56.338 ++ grep -v 'sudo pgrep'
00:30:56.338 + sudo kill -9 84343
00:30:56.350 [Pipeline] sh
00:30:56.633 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:06.678 [Pipeline] sh
00:31:06.962 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:06.962 Artifacts sizes are good
00:31:06.977 [Pipeline] archiveArtifacts
00:31:06.985 Archiving artifacts
00:31:07.149 [Pipeline] sh
00:31:07.431 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:31:07.448 [Pipeline] cleanWs
00:31:07.458 [WS-CLEANUP] Deleting project workspace...
00:31:07.458 [WS-CLEANUP] Deferred wipeout is used...
00:31:07.465 [WS-CLEANUP] done
00:31:07.467 [Pipeline] }
00:31:07.490 [Pipeline] // catchError
00:31:07.505 [Pipeline] sh
00:31:07.788 + logger -p user.info -t JENKINS-CI
00:31:07.799 [Pipeline] }
00:31:07.817 [Pipeline] // stage
00:31:07.825 [Pipeline] }
00:31:07.844 [Pipeline] // node
00:31:07.850 [Pipeline] End of Pipeline
00:31:07.887 Finished: SUCCESS